10 results for Scale development
at Duke University
Abstract:
Thermodynamic stability measurements on proteins and protein-ligand complexes can offer insights not only into the fundamental properties of protein folding reactions and protein functions, but also into the development of protein-directed therapeutic agents to combat disease. Conventional calorimetric or spectroscopic approaches for measuring protein stability typically require large amounts of purified protein. This requirement has precluded their use in proteomic applications. Stability of Proteins from Rates of Oxidation (SPROX) is a recently developed mass spectrometry-based approach for proteome-wide thermodynamic stability analysis. Since the proteomic coverage of SPROX is fundamentally limited by the detection of methionine-containing peptides, the use of tryptophan-containing peptides was investigated in this dissertation. A new SPROX-like protocol was developed that measured protein folding free energies using the denaturant dependence of the rate at which globally protected tryptophan and methionine residues are modified with dimethyl (2-hydroxyl-5-nitrobenzyl) sulfonium bromide and hydrogen peroxide, respectively. This so-called Hybrid protocol was applied to proteins in yeast and MCF-7 cell lysates and achieved a ~50% increase in proteomic coverage compared to probing only methionine-containing peptides. Subsequently, the Hybrid protocol was successfully utilized to identify and quantify both known and novel protein-ligand interactions in cell lysates. The ligands under study included the well-known Hsp90 inhibitor geldanamycin and the less well-understood omeprazole sulfide that inhibits liver-stage malaria. In addition to protein-small molecule interactions, protein-protein interactions involving Puf6 were investigated using the SPROX technique in comparative thermodynamic analyses performed on wild-type and Puf6-deletion yeast strains. A total of 39 proteins were detected as Puf6 targets and 36 of these targets were previously unknown to interact with Puf6. Finally, to facilitate the SPROX/Hybrid data analysis process and minimize human errors, a Bayesian algorithm was developed for transition midpoint assignment. In summary, the work in this dissertation expanded the scope of SPROX and evaluated the use of SPROX/Hybrid protocols for characterizing protein-ligand interactions in complex biological mixtures.
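To make the underlying measurement concrete: in SPROX-style experiments, the extent of modification of a globally protected residue is followed as a function of denaturant concentration and fit to a sigmoidal transition whose midpoint (C1/2) feeds into the folding free energy. A minimal sketch of such a midpoint fit on hypothetical data is shown below; the four-parameter logistic form, the scipy-based fit, and the example values are illustrative assumptions, not the dissertation's Bayesian assignment algorithm.

```python
# Hypothetical sketch: fit a denaturant-dependence curve to locate a
# transition midpoint (C1/2), as in SPROX-style stability analysis.
# The logistic form and example data are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def transition(c, lower, upper, c_half, m):
    """Four-parameter sigmoid in denaturant concentration c (M)."""
    return lower + (upper - lower) / (1.0 + np.exp(-m * (c - c_half)))

# Hypothetical normalized modification extents vs. [denaturant] (M)
conc = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
frac_modified = np.array([0.05, 0.07, 0.12, 0.48, 0.85, 0.93, 0.97])

popt, _ = curve_fit(transition, conc, frac_modified, p0=[0.0, 1.0, 1.5, 3.0])
print(f"estimated transition midpoint C1/2 = {popt[2]:.2f} M")
```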
Abstract:
The Veterans Health Administration (VHA) in the Department of Veterans Affairs (VA) has emerged as a national and international leader in the delivery and research of telehealth-based treatment. Several unique characteristics of care in VA settings intersect to create an ideal environment for telehealth modalities and research. However, the value of telehealth experience and initiatives in VA settings is limited if telehealth strategies cannot be widely exported to other public or private systems. Whereas a hierarchical organization, such as VA, can innovate and fund change relatively quickly based on provider and patient preferences and a growing knowledge base, other health provider organizations and third-party payers will likely require replicable scientific findings over time before incremental investments will be made to create infrastructure, reform regulatory barriers, and amend laws to accommodate expansion of telehealth modalities. Accordingly, large-scale, scientifically rigorous telehealth research in VHA settings is essential not only to investigate the efficacy of existing and future telehealth practices in VHA, but also to hasten the development of telehealth infrastructure in private and other public health settings. We propose an expanded partnership between the VA, NIH, and other funding agencies to investigate creative and pragmatic uses of telehealth technology. To this end, we identify six specific areas of research we believe to be particularly relevant to the efficient development of telehealth modalities in civilian and military contexts outside VHA.
Abstract:
The population structure of an organism reflects its evolutionary history and influences its evolutionary trajectory. It constrains how genetic diversity can be combined and reveals patterns of past gene flow. Understanding it is a prerequisite for detecting genomic regions under selection, predicting the effect of population disturbances, or modeling gene flow. This paper examines the detailed global population structure of Arabidopsis thaliana. Using a set of 5,707 plants collected from around the globe and genotyped at 149 SNPs, we show that while A. thaliana as a species self-fertilizes 97% of the time, there is considerable variation among local groups. This level of outcrossing greatly limits observed heterozygosity but is sufficient to generate considerable local haplotypic diversity. We also find that in its native Eurasian range A. thaliana exhibits continuous isolation by distance at every geographic scale without natural breaks corresponding to classical notions of populations. By contrast, in North America, where it exists as an exotic species, A. thaliana exhibits little or no population structure at a continental scale but local isolation by distance that extends over hundreds of kilometers. This suggests a pattern for the development of isolation by distance that can establish itself shortly after an organism fills a new habitat range. It also raises questions about the general applicability of many standard population genetics models. Any model based on discrete clusters of interchangeable individuals will be an uneasy fit to organisms like A. thaliana, which exhibit continuous isolation by distance on many scales.
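As a back-of-the-envelope illustration of why a 97% selfing rate greatly limits observed heterozygosity, the textbook equilibrium inbreeding coefficient under partial self-fertilization is F = s/(2 - s). The snippet below plugs in the selfing rate quoted above; the formula is standard population genetics, not a calculation taken from this paper.

```python
# Textbook illustration: equilibrium inbreeding under partial selfing.
# F = s / (2 - s); observed heterozygosity H = (1 - F) * H_expected.
s = 0.97                      # selfing rate reported for A. thaliana
F = s / (2.0 - s)             # equilibrium inbreeding coefficient (~0.942)
print(f"F = {F:.3f}")
print(f"heterozygosity retained = {(1 - F):.1%} of Hardy-Weinberg expectation")
```

With s = 0.97 this leaves only about 6% of the Hardy-Weinberg heterozygosity, consistent with the limited observed heterozygosity described above.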
Abstract:
BACKGROUND: Web-based decision aids are increasingly important in medical research and clinical care. However, few have been studied in an intensive care unit setting. The objectives of this study were to develop a Web-based decision aid for family members of patients receiving prolonged mechanical ventilation and to evaluate its usability and acceptability. METHODS: Using an iterative process involving 48 critical illness survivors, family surrogate decision makers, and intensivists, we developed a Web-based decision aid addressing goals of care preferences for surrogate decision makers of patients with prolonged mechanical ventilation that could be either administered by study staff or completed independently by family members (Development Phase). After piloting the decision aid among 13 surrogate decision makers and seven intensivists, we assessed the decision aid's usability in the Evaluation Phase among a cohort of 30 surrogate decision makers using the Systems Usability Scale (SUS). Acceptability was assessed using measures of satisfaction and preference for electronic Collaborative Decision Support (eCODES) versus the original printed decision aid. RESULTS: The final decision aid, termed 'electronic Collaborative Decision Support', provides a framework for shared decision making, elicits relevant values and preferences, incorporates clinical data to personalize prognostic estimates generated from the ProVent prediction model, generates a printable document summarizing the user's interaction with the decision aid, and can digitally archive each user session. Usability was excellent (mean SUS, 80 ± 10) overall, but lower among those 56 years and older (73 ± 7) versus those who were younger (84 ± 9); p = 0.03. A total of 93% of users reported a preference for electronic versus printed versions. CONCLUSIONS: The Web-based decision aid for ICU surrogate decision makers can facilitate highly individualized information sharing with excellent usability and acceptability. Decision aids that employ an electronic format such as eCODES represent a strategy that could enhance patient-clinician collaboration and decision making quality in intensive care.
Abstract:
BACKGROUND: Anticoagulation can reduce quality of life, and different models of anticoagulation management might have different impacts on satisfaction with this component of medical care. Yet, to our knowledge, there are no scales measuring quality of life and satisfaction with anticoagulation that can be generalized across different models of anticoagulation management. We describe the development and preliminary validation of such an instrument - the Duke Anticoagulation Satisfaction Scale (DASS). METHODS: The DASS is a 25-item scale addressing the (a) negative impacts of anticoagulation (limitations, hassles and burdens); and (b) positive impacts of anticoagulation (confidence, reassurance, satisfaction). Each item has 7 possible responses. The DASS was administered to 262 patients currently receiving oral anticoagulation. Scales measuring generic quality of life, satisfaction with medical care, and tendency to provide socially desirable responses were also administered. Statistical analysis included assessment of item variability, internal consistency (Cronbach's alpha), scale structure (factor analysis), and correlations between the DASS and demographic variables, clinical characteristics, and scores on the above scales. A follow-up study of 105 additional patients assessed test-retest reliability. RESULTS: 220 subjects answered all items. Ceiling and floor effects were modest, and 25 of the 27 proposed items grouped into 2 factors (positive impacts, negative impacts, this latter factor being potentially subdivided into limitations versus hassles and burdens). Each factor had a high degree of internal consistency (Cronbach's alpha 0.78-0.91). The limitations and hassles factors consistently correlated with the SF-36 scales measuring generic quality of life, while the positive psychological impact scale correlated with age and time on anticoagulation. The intra-class correlation coefficient for test-retest reliability was 0.80. CONCLUSIONS: The DASS has demonstrated reasonable psychometric properties to date. Further validation is ongoing. To the degree that dissatisfaction with anticoagulation leads to decreased adherence, poorer INR control, and poor clinical outcomes, the DASS has the potential to help identify reasons for dissatisfaction (and positive satisfaction), and thus help to develop interventions to break this cycle. As an instrument designed to be applicable across multiple models of anticoagulation management, the DASS could be crucial in the scientific comparison between those models of care.
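For readers unfamiliar with the internal-consistency statistic reported above, Cronbach's alpha for a k-item scale is k/(k - 1) * (1 - sum of item variances / variance of the total score). A minimal sketch on hypothetical item responses (not DASS data) follows.

```python
# Minimal sketch of Cronbach's alpha for a k-item scale (hypothetical data).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                         # shared construct
responses = latent + rng.normal(scale=0.8, size=(200, 5))  # 5 correlated items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```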
Abstract:
The Bakken region of North Dakota and Montana has experienced perhaps the greatest effects of increased oil and gas development in the United States, with major implications for local governments. Though development of the Bakken began in the early 2000s, large-scale drilling and population growth dramatically affected the region from roughly 2008 through today. This case study examines the local government fiscal benefits and challenges experienced by Dunn County and Watford City, which lie near the heart of the producing region. For both local governments, the initial growth phase presented major fiscal challenges due to rapidly expanding service demands and insufficient revenue. In the following years, these challenges eased as demand for services slowed due to declining industry activity and state tax policies redirected more funds to localities. Looking forward, both local governments describe their fiscal health as stronger because of the Bakken boom, though higher debt loads and an economy heavily dependent on the volatile oil and gas industry each pose challenges for future fiscal stability.
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, yet uncertainty quantification remains essential in the sciences, where the number of parameters to estimate often exceeds the sample size despite the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n=all" therefore has little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. For the first, the focus is on joint inference beyond the standard setting of multivariate continuous data that has dominated previous theoretical work in this area. For the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
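To fix ideas, a PARAFAC-type latent class model represents the joint probability mass function of p categorical variables as a mixture over k latent classes of independent categoricals, i.e., a nonnegative rank-k factorization of the probability tensor. The sketch below constructs such a tensor for a small example; the dimensions and parameters are illustrative, and the collapsed Tucker decompositions proposed in Chapter 2 are not reproduced here.

```python
# Sketch: PARAFAC / latent class representation of a joint pmf for
# p categorical variables with d levels each, using k latent classes.
# Dimensions and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
p, d, k = 3, 4, 2                             # variables, levels, latent classes
nu = rng.dirichlet(np.ones(k))                # class weights
lam = rng.dirichlet(np.ones(d), size=(k, p))  # lam[h, j, :] = P(x_j = . | class h)

# pi[c1, c2, c3] = sum_h nu[h] * prod_j lam[h, j, c_j]  (nonnegative rank-k tensor)
pi = np.zeros((d,) * p)
for h in range(k):
    outer = lam[h, 0]
    for j in range(1, p):
        outer = np.multiply.outer(outer, lam[h, j])
    pi += nu[h] * outer

assert np.isclose(pi.sum(), 1.0)              # a valid joint pmf
```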
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and in other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
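As a point of reference for the Gaussian-approximation idea (though not the optimal approximation derived in Chapter 4), a Laplace approximation centers a Gaussian at the posterior mode with covariance equal to the inverse Hessian of the negative log posterior. A minimal sketch for a Poisson log-linear model with a Gaussian prior, on simulated data, is given below; every modeling choice here is an illustrative assumption.

```python
# Illustrative Laplace (Gaussian) approximation for a Poisson log-linear model
# with a Gaussian prior -- not the optimal approximation of Chapter 4.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))                 # hypothetical design matrix
beta_true = np.array([0.5, -0.3, 0.2])
y = rng.poisson(np.exp(X @ beta_true))
tau2 = 10.0                                   # prior variance

def neg_log_post(beta):
    eta = X @ beta
    return -(y @ eta - np.exp(eta).sum()) + 0.5 * beta @ beta / tau2

fit = minimize(neg_log_post, np.zeros(3), method="BFGS")
mode = fit.x
# Hessian of the negative log posterior at the mode
W = np.exp(X @ mode)
H = X.T @ (W[:, None] * X) + np.eye(3) / tau2
cov = np.linalg.inv(H)
print("posterior mode ~", np.round(mode, 2))
print("approx posterior sd ~", np.round(np.sqrt(np.diag(cov)), 2))
```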
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
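The waiting-time idea is simple to state computationally: fix a high threshold, record the times at which the series exceeds it, and study the gaps between consecutive exceedances. A minimal sketch on simulated data follows; the AR(1) series and the 0.98-quantile threshold are illustrative assumptions, not choices from Chapter 5.

```python
# Sketch: inter-exceedance waiting times above a high threshold.
# The simulated AR(1) series and the 0.98 quantile threshold are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n, phi = 10_000, 0.7
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

u = np.quantile(x, 0.98)                  # high threshold
exceed_times = np.flatnonzero(x > u)      # indices of threshold exceedances
waits = np.diff(exceed_times)             # waiting times between exceedances
print(f"threshold u = {u:.2f}, {exceed_times.size} exceedances")
print(f"mean waiting time = {waits.mean():.1f}, median = {np.median(waits):.0f}")
```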
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
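One concrete instance of the approximations studied in Chapter 6 replaces the full-data log-likelihood inside a Metropolis-Hastings step with a rescaled log-likelihood computed on a random subset of the data, drawn once to define an approximating kernel. The sketch below does this for a Gaussian mean with a flat prior; the model, subset size, and proposal scale are illustrative assumptions, and none of the chapter's error-control results are reproduced.

```python
# Sketch: approximate MCMC in which the full-data log-likelihood is replaced
# by a rescaled log-likelihood on a fixed random subset of the data.
# Model, subset size, and proposal scale are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(loc=2.0, scale=1.0, size=50_000)
N, m = data.size, 500
subset = rng.choice(data, size=m, replace=False)      # fixed random subset

def approx_log_post(mu):                              # flat prior assumed
    return (N / m) * np.sum(-0.5 * (subset - mu) ** 2)

mu, lp = 0.0, approx_log_post(0.0)
samples = []
for _ in range(10_000):
    prop = mu + 0.01 * rng.normal()                   # random-walk proposal
    lp_prop = approx_log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        mu, lp = prop, lp_prop
    samples.append(mu)
print(f"approximate posterior mean ~ {np.mean(samples[2000:]):.3f}")
```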
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
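For concreteness, the truncated Normal data augmentation sampler mentioned above is the classic Albert-Chib scheme for probit regression: latent variables z_i are drawn from N(x_i'beta, 1) truncated to be positive when y_i = 1 and negative when y_i = 0, after which beta has a Gaussian full conditional. A minimal sketch with a flat prior on beta and simulated rare-event data (both illustrative assumptions) follows.

```python
# Sketch: Albert-Chib truncated-normal data augmentation for probit regression.
# Flat prior on beta; simulated rare-event data are illustrative.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(5)
n = 2_000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-2.5, 0.5])                  # intercept makes successes rare
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
chol = np.linalg.cholesky(XtX_inv)
beta = np.zeros(2)
draws = []
for _ in range(2_000):
    # 1) Draw latent z_i from N(x_i'beta, 1) truncated by the sign of y_i.
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)            # z > 0 when y = 1
    hi = np.where(y == 1, np.inf, -mu)             # z < 0 when y = 0
    z = mu + truncnorm.rvs(lo, hi, random_state=rng)
    # 2) Draw beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}).
    beta = XtX_inv @ (X.T @ z) + chol @ rng.normal(size=2)
    draws.append(beta.copy())
print("posterior mean of beta ~", np.round(np.mean(draws[500:], axis=0), 2))
```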
Abstract:
The evolution of reproductive strategies involves a complex calculus of costs and benefits to both parents and offspring. Many marine animals produce embryos packaged in tough egg capsules or gelatinous egg masses attached to benthic surfaces. While these egg structures can protect against environmental stresses, the packaging is energetically costly for parents to produce. In this series of studies, I examined a variety of ecological factors affecting the evolution of benthic development as a life history strategy. I used marine gastropods as my model system because they are incredibly diverse and abundant worldwide, and they exhibit a variety of reproductive and developmental strategies.
The first study examines predation on benthic egg masses. I investigated: 1) behavioral mechanisms of predation when embryos are targeted (rather than the whole egg mass); 2) the specific role of gelatinous matrix in predation. I hypothesized that gelatinous matrix does not facilitate predation. One study system was the sea slug Olea hansineensis, an obligate egg mass predator, feeding on the sea slug Haminoea vesicula. Olea fed intensely and efficiently on individual Haminoea embryos inside egg masses but showed no response to live embryos removed from gel, suggesting that gelatinous matrix enables predation. This may be due to mechanical support of the feeding predator by the matrix. However, Haminoea egg masses outnumber Olea by two orders of magnitude in the field, and each egg mass can contain many tens of thousands of embryos, so predation pressure on individuals is likely not strong. The second system involved the snail Nassarius vibex, a non-obligate egg mass predator, feeding on the polychaete worm Clymenella mucosa. Gel neither inhibits nor promotes embryo predation for Nassarius, but because it cannot target individual embryos inside an egg mass, its feeding is slow and inefficient, and feeding rates in the field are quite low. However, snails that compete with Nassarius for scavenged food have not been seen to eat egg masses in the field, leaving Nassarius free to exploit the resource. Overall, egg mass predation in these two systems likely benefits the predators much more than it negatively affects the prey. Thus, selection for environmentally protective aspects of egg mass production may be much stronger than selection for defense against predation.
In the second study, I examined desiccation resistance in intertidal egg masses made by Haminoea vesicula, which preferentially attaches its flat, ribbon-shaped egg masses to submerged substrata. Egg masses occasionally detach and become stranded on exposed sand at low tide. Unlike adults, the encased embryos cannot avoid desiccation by selectively moving about the habitat, and the egg mass shape has high surface-area-to-volume ratio that should make it prone to drying out. Thus, I hypothesized that the embryos would not survive stranding. I tested this by deploying individual egg masses of two age classes on exposed sand bars for the duration of low tide. After rehydration, embryos midway through development showed higher rates of survival than newly-laid embryos, though for both stages survival rates over 25% were frequently observed. Laboratory desiccation trials showed that >75% survival is possible in an egg mass that has lost 65% of its water weight, and some survival (<25%) was observed even after 83% water weight lost. Although many surviving embryos in both experiments showed damage, these data demonstrate that egg mass stranding is not necessarily fatal to embryos. They may be able to survive a far greater range of conditions than they normally encounter, compensating for their lack of ability to move. Also, desiccation tolerance of embryos may reduce pressure on parents to find optimal laying substrata.
The third study takes a big-picture approach to investigating the evolution of different developmental strategies in cone snails, the largest genus of marine invertebrates. Cone snail species hatch out of their capsules as either swimming larvae or non-dispersing forms, and their developmental mode has direct consequences for biogeographic patterns. Variability in life history strategies among taxa may be influenced by biological, environmental, or phylogenetic factors, or a combination of these. While most prior research has examined these factors individually, my aim was to investigate the effects of a host of intrinsic, extrinsic, and historical factors on two fundamental aspects of life history: egg size and egg number. I used phylogenetic generalized least-squares regression models to examine relationships between these two egg traits and a variety of hypothesized intrinsic and extrinsic variables. Adult shell morphology and spatial variability in productivity and salinity across a species' geographic range had the strongest effects on egg diameter and number of eggs per capsule. Phylogeny had no significant influence. Developmental mode in Conus appears to be influenced mostly by species-level adaptations and niche specificity rather than phylogenetic conservatism. Patterns of egg size and egg number appear to reflect energetic tradeoffs with body size and specific morphologies as well as adaptations to variable environments. Overall, this series of studies highlights the importance of organism-scale biotic and abiotic interactions in evolutionary patterns.
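For reference, phylogenetic generalized least-squares replaces the identity error covariance of ordinary least squares with a phylogenetic covariance matrix V (for example, one implied by Brownian motion on the tree), giving beta_hat = (X'V^{-1}X)^{-1} X'V^{-1} y. The sketch below is a generic GLS computation with a hypothetical covariance matrix, not the fitted Conus models.

```python
# Sketch: generalized least-squares with a phylogenetic covariance matrix V.
# V here is a hypothetical positive-definite stand-in, not the Conus tree.
import numpy as np

rng = np.random.default_rng(6)
n = 20
A = rng.normal(size=(n, n))
V = A @ A.T + n * np.eye(n)                              # hypothetical covariance

X = np.column_stack([np.ones(n), rng.normal(size=n)])    # intercept + predictor
beta_true = np.array([1.0, 0.5])
L = np.linalg.cholesky(V)
y = X @ beta_true + L @ rng.normal(size=n)               # phylogenetically correlated errors

Vinv = np.linalg.inv(V)
beta_hat = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
print("PGLS estimates ~", np.round(beta_hat, 2))
```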
Abstract:
Optical coherence tomography (OCT) is a noninvasive three-dimensional interferometric imaging technique capable of achieving micrometer scale resolution. It is now a standard of care in ophthalmology, where it is used to improve the accuracy of early diagnosis, to better understand the source of pathophysiology, and to monitor disease progression and response to therapy. In particular, retinal imaging has been the most prevalent clinical application of OCT, but researchers and companies alike are developing OCT systems for cardiology, dermatology, dentistry, and many other medical and industrial applications.
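The micrometer-scale axial resolution mentioned above follows from the coherence length of the light source: for a Gaussian spectrum, delta_z = (2 ln 2 / pi) * lambda_0^2 / delta_lambda. The snippet below evaluates this standard expression with illustrative wavelength and bandwidth values that are not taken from the systems in this dissertation.

```python
# Standard OCT axial-resolution estimate for a Gaussian source spectrum.
# Wavelength and bandwidth values are illustrative, not from this work.
import math

lambda_0 = 840e-9        # center wavelength (m), typical of retinal OCT
delta_lambda = 50e-9     # spectral bandwidth (m), FWHM
delta_z = (2 * math.log(2) / math.pi) * lambda_0 ** 2 / delta_lambda
print(f"axial resolution in air ~ {delta_z * 1e6:.1f} micrometers")
```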
Adaptive optics (AO) is a technique used to reduce monochromatic aberrations in optical instruments. It is used in astronomical telescopes, laser communications, high-power lasers, retinal imaging, optical fabrication and microscopy to improve system performance. Scanning laser ophthalmoscopy (SLO) is a noninvasive confocal imaging technique that produces high contrast two-dimensional retinal images. AO is combined with SLO (AOSLO) to compensate for the wavefront distortions caused by the optics of the eye, providing the ability to visualize the living retina with cellular resolution. AOSLO has shown great promise to advance the understanding of the etiology of retinal diseases on a cellular level.
Broadly, we endeavor to enhance the vision outcome of ophthalmic patients through improved diagnostics and personalized therapy. Toward this end, the objective of the work presented herein was the development of advanced techniques for increasing the imaging speed, reducing the form factor, and broadening the versatility of OCT and AOSLO. Despite our focus on applications in ophthalmology, the techniques developed could be applied to other medical and industrial applications. In this dissertation, a technique to quadruple the imaging speed of OCT was developed. This technique was demonstrated by imaging the retinas of healthy human subjects. A handheld, dual depth OCT system was developed. This system enabled sequential imaging of the anterior segment and retina of human eyes. Finally, handheld SLO/OCT systems were developed, culminating in the design of a handheld AOSLO system. This system has the potential to provide cellular level imaging of the human retina, resolving even the most densely packed foveal cones.
Abstract:
The full-scale base-isolated structure studied in this dissertation is the only base-isolated building in the South Island of New Zealand. It sustained hundreds of earthquake ground motions from September 2010 well into 2012. Several large earthquake responses were recorded in December 2011 by NEES@UCLA and by a GeoNet recording station near Christchurch Women's Hospital. The primary focus of this dissertation is to advance the state of the art of methods to evaluate the performance of seismic-isolated structures and the effects of soil-structure interaction by developing new data-processing methodologies to overcome current limitations and by implementing advanced numerical modeling in OpenSees for direct analysis of soil-structure interaction.
This dissertation presents a novel method for recovering force-displacement relations within the isolators of building structures with unknown nonlinearities from sparse seismic-response measurements of floor accelerations. The method requires only direct matrix calculations (factorizations and multiplications); no iterative trial-and-error methods are required. The method requires a mass matrix, or at least an estimate of the floor masses. A stiffness matrix may be used, but is not necessary. Essentially, the method operates on a matrix of incomplete measurements of floor accelerations. In the special case of complete floor measurements of systems with linear dynamics, real modes, and equal floor masses, the principal components of this matrix are the modal responses. In the more general case of partial measurements and nonlinear dynamics, the method extracts a number of linearly-dependent components from Hankel matrices of measured horizontal response accelerations, assembles these components row-wise and extracts principal components from the singular value decomposition of this large matrix of linearly-dependent components. These principal components are then interpolated between floors in a way that minimizes the curvature energy of the interpolation. This interpolation step can make use of a reduced-order stiffness matrix, a backward difference matrix or a central difference matrix. The measured and interpolated floor acceleration components at all floors are then assembled and multiplied by a mass matrix. The recovered in-service force-displacement relations are then incorporated into the OpenSees soil structure interaction model.
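A schematic version of the signal-processing core described above, namely building Hankel matrices from measured floor accelerations, stacking them row-wise, and extracting principal components via the singular value decomposition, is sketched below on synthetic data. The interpolation, mass-matrix, and force-recovery steps of the actual method are not reproduced, and all dimensions are illustrative.

```python
# Sketch: extract principal components from stacked Hankel matrices of
# measured floor accelerations (synthetic data; the interpolation and
# force-recovery steps of the full method are not reproduced here).
import numpy as np

def hankel_matrix(signal, block_rows):
    """Hankel matrix whose rows are shifted copies of the signal."""
    n_cols = signal.size - block_rows + 1
    return np.array([signal[i:i + n_cols] for i in range(block_rows)])

rng = np.random.default_rng(7)
t = np.linspace(0, 20, 2_000)
# Synthetic accelerations at three instrumented floors sharing two "modes".
modes = np.vstack([np.sin(2 * np.pi * 1.2 * t), np.sin(2 * np.pi * 3.5 * t)])
mixing = rng.normal(size=(3, 2))
accels = mixing @ modes + 0.05 * rng.normal(size=(3, t.size))

stacked = np.vstack([hankel_matrix(a, block_rows=50) for a in accels])
U, s, Vt = np.linalg.svd(stacked, full_matrices=False)
print("leading singular values:", np.round(s[:5], 1))   # few dominant components
```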
Numerical simulations of soil-structure interaction involving non-uniform soil behavior are conducted following the development of the complete soil-structure interaction model of Christchurch Women's Hospital in OpenSees. In these 2D OpenSees models, the superstructure is modeled as two-dimensional frames in the short-span and long-span directions, respectively. The lead rubber bearings are modeled as elastomeric bearing (Bouc-Wen) elements. The soil underlying the concrete raft foundation is modeled with linear elastic plane-strain quadrilateral elements. The non-uniformity of the soil profile is incorporated by extracting and interpolating shear wave velocity profiles from the Canterbury Geotechnical Database. The validity of the complete two-dimensional soil-structure interaction OpenSees model for the hospital is checked by comparing peak floor responses and force-displacement relations within the isolation system obtained from the OpenSees simulations to the recorded measurements. General explanations and implications, supported by displacement drifts, floor acceleration and displacement responses, and force-displacement relations, are presented to address the effects of soil-structure interaction.
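The lead rubber bearing elements mentioned above follow a Bouc-Wen hysteresis law, in which the restoring force combines a linear elastic component with an evolutionary hysteretic variable z: F = alpha*k*x + (1 - alpha)*k*z, with dz/dt = dx/dt * [A - |z|^n (beta*sgn(z*dx/dt) + gamma)]. The sketch below integrates this standard model under an imposed sinusoidal displacement; the parameters are illustrative and are not calibrated to the hospital's isolators or to the OpenSees element itself.

```python
# Sketch: Bouc-Wen hysteresis (as used for lead rubber bearings), integrated
# with forward Euler under a sinusoidal displacement history.
# Parameters are illustrative, not calibrated to the hospital's isolators.
import numpy as np

alpha, k = 0.1, 1.5e6        # post-to-pre-yield stiffness ratio, initial stiffness (N/m)
A, beta, gamma, n_exp = 1.0, 55.0, 45.0, 1.0   # Bouc-Wen shape parameters

dt = 1e-3
t = np.arange(0.0, 10.0, dt)
x = 0.05 * np.sin(2 * np.pi * 0.5 * t)         # imposed bearing displacement (m)
xdot = np.gradient(x, dt)

z = np.zeros_like(x)                            # hysteretic displacement variable
for i in range(1, t.size):
    zi = z[i - 1]
    dz = xdot[i - 1] * (A - np.abs(zi) ** n_exp *
                        (beta * np.sign(xdot[i - 1] * zi) + gamma))
    z[i] = zi + dz * dt

force = alpha * k * x + (1 - alpha) * k * z     # bearing restoring force (N)
print(f"peak restoring force ~ {np.abs(force).max():.0f} N")
```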