7 results for Computational modeling at Duke University
A New Method for Modeling Free Surface Flows and Fluid-structure Interaction with Ocean Applications
Abstract:
The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these is the need to stably and accurately represent both the fluid-fluid interface between water and air and the fluid-structure interfaces that arise between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem: the first focuses on the development of sophisticated fluid-fluid interface representations, and the second focuses primarily on scalability and extensibility to higher-order methods.
We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments are tested extensively. The method demonstrates numerous advantages over the traditional level set method, among them improved conservation of fluid volume and the representation of subgrid structures.
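To make the level set idea concrete, the sketch below advects a level set function in a prescribed rotating velocity field with a plain first-order upwind scheme. It is an illustrative toy only: the dissertation's narrow-band GALSM additionally transports the gradient of the level set and restricts work to a band around the interface, and the grid size, time step, and velocity field here are assumptions.

```python
# Illustrative sketch: first-order upwind advection of a level set function.
# The narrow-band, gradient-augmented machinery of the GALSM is omitted.
import numpy as np

n, dt, steps = 128, 2e-3, 500
x = np.linspace(-1.0, 1.0, n)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

phi = np.sqrt((X - 0.3) ** 2 + Y ** 2) - 0.2   # signed distance to a circle of radius 0.2
u, v = -Y, X                                   # rigid-body rotation about the origin

for _ in range(steps):
    # One-sided differences (periodic wrap at the edges, acceptable for this toy domain).
    fx_m = (phi - np.roll(phi, 1, axis=0)) / h
    fx_p = (np.roll(phi, -1, axis=0) - phi) / h
    fy_m = (phi - np.roll(phi, 1, axis=1)) / h
    fy_p = (np.roll(phi, -1, axis=1) - phi) / h
    # Upwind selection: use the backward difference where the velocity component is positive.
    phi -= dt * (u * np.where(u > 0, fx_m, fx_p) + v * np.where(v > 0, fy_m, fy_p))

# The zero level set marks the interface; its enclosed area should stay near pi * 0.2**2.
print("enclosed area:", (phi < 0).sum() * h * h)
```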
Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: the air phase is replaced by a pressure boundary condition to greatly reduce the size of the computational domain; a cut-cell finite-volume approach is chosen to minimize fluid volume loss and open the door to higher-order methods; and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.
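As a small illustration of the cut-cell idea (only the wetted portion of an interface-cut cell contributes to the discretization), the hypothetical helper below estimates the wet volume fraction of a rectangular cell cut by a given free-surface profile. The function, its sampling rule, and the example surface are assumptions for illustration, not the dissertation's implementation.

```python
import numpy as np

def wet_fraction(x0, x1, y0, y1, surface, n_quad=64):
    """Estimate the fraction of the cell [x0, x1] x [y0, y1] lying below the
    free surface y = surface(x), using a simple sampled column-height rule."""
    xs = np.linspace(x0, x1, n_quad)
    depth = np.clip(surface(xs) - y0, 0.0, y1 - y0)   # wet column height within the cell
    return depth.mean() / (y1 - y0)

# Example: a gently sloping free surface cutting a unit cell.
print(wet_fraction(0.0, 1.0, 0.0, 1.0, lambda x: 0.4 + 0.3 * x))   # ~0.55
```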
Memory-Based Attentional Guidance: A Window to the Relationship between Working Memory and Attention
Abstract:
Attention, the cognitive means by which we prioritize the processing of a subset of information, is necessary for operating efficiently and effectively in the world. Thus, a critical theoretical question is how information is selected. In the visual domain, working memory (WM)—which refers to the short-term maintenance and manipulation of information that is no longer accessible by the senses—has been highlighted as an important determinant of what is selected by visual attention. Furthermore, although WM and attention have traditionally been conceived as separate cognitive constructs, an abundance of behavioral and neural evidence indicates that these two domains are in fact intertwined and overlapping. The aim of this dissertation is to better understand the nature of WM and attention, primarily through the phenomenon of memory-based attentional guidance, whereby the active maintenance of items in visual WM reliably biases the deployment of attention to memory-matching items in the visual environment. The research presented here employs a combination of behavioral, functional imaging, and computational modeling techniques that address: (1) WM guidance effects with respect to the traditional dichotomy of top-down versus bottom-up attentional control; (2) under what circumstances the contents of WM impact visual attention; and (3) the broader hypothesis of a predictive and competitive interaction between WM and attention. Collectively, these empirical findings reveal the importance of WM as a distinct factor in attentional control and support current models of multiple-state WM, which may have broader implications for how we select and maintain information.
Abstract:
Polarization is important for the function and morphology of many different cell types. The key regulators of polarity in eukaryotes are the Rho-family GTPases. In the budding yeast Saccharomyces cerevisiae, which must polarize in order to bud and to mate, the master regulator is the highly conserved Rho GTPase Cdc42. During polarity establishment, active Cdc42 accumulates at a site on the plasma membrane that marks the “front” of the cell where the bud will emerge. The orientation of polarization is guided by upstream cues that dictate the site of Cdc42 clustering. However, in the absence of upstream cues, yeast can still polarize in a random direction during symmetry breaking. Symmetry breaking suggests that cells possess an autocatalytic polarization mechanism that can amplify stochastic fluctuations of polarity proteins through positive feedback.
Two different positive feedback mechanisms have been proposed to polarize Cdc42 in budding yeast. One model posits that Cdc42 activation must be localized to a site at the plasma membrane. Another model posits that Cdc42 delivery must be localized to a particular site at the plasma membrane. Although both mechanisms could work in parallel to polarize Cdc42, it is unclear which mechanism is critical to polarity establishment. We directly tested the predictions of the two positive feedback models using genetics and live microscopy. We found that localized Cdc42 activation is necessary for polarity establishment.
While this explains how active Cdc42 localizes to a particular site at the plasma membrane, it does not address how Cdc42 concentrates at that site. Several different mechanisms have been proposed to concentrate Cdc42. The GDI can extract Cdc42 from membranes and selectively mobilize GDP-Cdc42 in the cytoplasm. It was proposed that selectively mobilizing GDP-Cdc42, in combination with local activation, could locally concentrate total Cdc42 at the polarity site. Although the GDI is important for rapid Cdc42 accumulation at the polarity site, it is not essential for Cdc42 concentration. It was also proposed that delivery of Cdc42 on actin-mediated vesicles can act as a backup pathway to concentrate Cdc42. However, we found no evidence for an actin-dependent concentrating pathway. Live microscopy experiments reveal that prenylated proteins are not restricted to membranes and can enter the cytoplasm. We found that the GDI-independent concentrating pathway still requires Cdc42 to exchange between the plasma membrane and the cytoplasm, which is supported by computational modeling. In the absence of the GDI, we found that the Cdc42 GAP became essential for polarization. We propose that the GAP limits leakage of GTP-Cdc42 into the cytoplasm, which would otherwise be prohibitive to Cdc42 polarization.
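For readers unfamiliar with this class of models, the toy simulation below shows how autocatalytic recruitment from a shared cytoplasmic pool produces winner-take-all polarization from small random fluctuations. It is a generic positive-feedback caricature with arbitrary parameters, not the mechanistic model used in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy winner-take-all model: n_sites membrane patches recruit protein from a
# shared cytoplasmic pool with autocatalytic (quadratic) positive feedback
# and linear return to the cytoplasm. All parameters are arbitrary.
n_sites, total = 20, 1.0
k_basal, k_feedback, k_off = 0.001, 500.0, 1.0
u = 1e-3 * rng.random(n_sites)        # small random membrane fluctuations
dt, steps = 2e-3, 100_000

for _ in range(steps):
    cytoplasm = total - u.sum()                          # conserved pool: membrane + cytoplasm
    recruit = (k_basal + k_feedback * u**2) * cytoplasm  # autocatalytic recruitment
    u += dt * (recruit - k_off * u)                      # forward Euler step

winner = np.argmax(u)
print(f"site {winner} holds {u[winner] / u.sum():.1%} of the membrane-bound protein")
```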
Abstract:
Transcription factors (TFs) control the temporal and spatial expression of target genes by interacting with DNA in a sequence-specific manner. Recent advances in high-throughput experiments that measure TF-DNA interactions in vitro and in vivo have facilitated the identification of DNA binding sites for thousands of TFs. However, it remains unclear how each individual TF achieves its specificity, especially in the case of paralogous TFs that recognize distinct genomic target sites despite sharing very similar DNA binding motifs. In my work, I used a combination of high-throughput in vitro protein-DNA binding assays and machine-learning algorithms to characterize and model the binding specificity of 11 paralogous TFs from 4 distinct structural families. My work demonstrates that even very closely related paralogous TFs, with indistinguishable DNA binding motifs, often exhibit differential binding specificity for their genomic target sites, especially for sites with moderate binding affinity. Importantly, the differences I identify in vitro and through computational modeling help explain, at least in part, the differential in vivo genomic targeting by paralogous TFs. Future work will focus on in vivo factors that might also be important for specificity differences between paralogous TFs, such as DNA methylation, interactions with protein cofactors, or the chromatin environment. In this larger context, my work emphasizes the importance of intrinsic DNA binding specificity in the targeting of paralogous TFs to the genome.
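A minimal sketch of the machine-learning side of such an analysis appears below: synthetic binding sites for two hypothetical paralogs that share a core motif but differ in flanking preferences are featurized by k-mer counts and separated with a simple classifier. The motifs, data, and model choice are assumptions for illustration, not the assays or algorithms of the dissertation.

```python
# Illustrative sketch with synthetic data and hypothetical motifs.
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
BASES = "ACGT"

def random_site(core_flank, length=16):
    """Random sequence with a planted core+flank motif at a random offset."""
    seq = rng.choice(list(BASES), size=length)
    pos = rng.integers(0, length - len(core_flank) + 1)
    seq[pos:pos + len(core_flank)] = list(core_flank)
    return "".join(seq)

# Hypothetical preferences: both paralogs share the GGAA core but differ in flanks.
sites_a = [random_site("CCGGAA") for _ in range(2000)]   # "paralog A" sites
sites_b = [random_site("TTGGAA") for _ in range(2000)]   # "paralog B" sites

KMERS = ["".join(p) for p in itertools.product(BASES, repeat=3)]

def kmer_counts(seq, k=3):
    counts = dict.fromkeys(KMERS, 0)
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    return [counts[km] for km in KMERS]

X = np.array([kmer_counts(s) for s in sites_a + sites_b])
y = np.array([0] * len(sites_a) + [1] * len(sites_b))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print(f"held-out accuracy separating the two paralogs: {clf.score(X_te, y_te):.2f}")
```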
Abstract:
This thesis focuses on the development of algorithms that allow protein design calculations to incorporate more realistic modeling assumptions. Protein design algorithms search large sequence spaces for protein sequences that are biologically and medically useful. Better modeling could improve the chance of success in designs and expand the range of problems to which these algorithms are applied. I have developed algorithms to improve modeling of backbone flexibility (DEEPer) and of more extensive continuous flexibility in general (EPIC and LUTE). I have also developed algorithms to perform multistate designs, which account for effects such as specificity, with provable guarantees of accuracy (COMETS), and to accommodate a wider range of energy functions in design (EPIC and LUTE).
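As context for the scale of the problem these algorithms address, the sketch below brute-forces the core combinatorial task in protein design: choosing one option per mutable position to minimize a sum of singleton and pairwise energies. The random energies and tiny design space are placeholders; provable methods such as those named above exist precisely to avoid such enumeration on realistic spaces.

```python
# Minimal sketch of the discrete search at the heart of protein design.
# Energies are random placeholders; the space is small enough to enumerate.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n_positions, n_options = 5, 4                       # tiny, brute-force-able space

E_single = rng.normal(size=(n_positions, n_options))
E_pair = rng.normal(size=(n_positions, n_positions, n_options, n_options))

def total_energy(assignment):
    """Sum of singleton energies plus pairwise energies over all position pairs."""
    e = sum(E_single[i, a] for i, a in enumerate(assignment))
    e += sum(E_pair[i, j, assignment[i], assignment[j]]
             for i in range(n_positions) for j in range(i + 1, n_positions))
    return e

best = min(itertools.product(range(n_options), repeat=n_positions), key=total_energy)
print("lowest-energy assignment:", best, "energy:", round(total_energy(best), 3))
```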
Abstract:
Advances in three related areas, state-space modeling, sequential Bayesian learning, and decision analysis, are addressed, with emphasis on the statistical challenges of scalability and associated dynamic sparsity. The key theme that ties the three areas together is Bayesian model emulation: solving challenging analytical and computational problems using creative model emulators. This idea defines theoretical and applied advances in non-linear, non-Gaussian state-space modeling, dynamic sparsity, decision analysis, and statistical computation, across linked contexts of multivariate time series and dynamic network studies. Examples and applications in financial time series and portfolio analysis, macroeconomics, and internet studies from computational advertising demonstrate the utility of the core methodological innovations.
Chapter 1 summarizes the three areas/problems and the key idea of emulation in those areas. Chapter 2 discusses the sequential analysis of latent threshold models using emulating models that allow analytical filtering to enhance the efficiency of posterior sampling. Chapter 3 examines the emulator model in decision analysis, or the synthetic model, which is equivalent to the loss function in the original minimization problem, and shows its performance in the context of sequential portfolio optimization. Chapter 4 describes a method for modeling streaming count data observed on a large network, which relies on emulating the whole, dependent network model with independent, conjugate sub-models customized to each set of flows. Chapter 5 reviews these advances and offers concluding remarks.
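As a concrete illustration of the Chapter 4 idea of emulating a dependent network model with independent conjugate sub-models, the sketch below tracks a single flow's count rate with a discounted Gamma-Poisson filter. This is a generic construction under assumed parameters, not necessarily the dissertation's exact specification.

```python
import numpy as np

class GammaPoissonFlow:
    """Conjugate Gamma(a, b) prior on a flow's Poisson rate, with a discount
    factor delta < 1 that inflates uncertainty between observations."""

    def __init__(self, a=1.0, b=1.0, delta=0.95):
        self.a, self.b, self.delta = a, b, delta

    def update(self, count):
        self.a = self.delta * self.a + count   # discount, then conjugate update
        self.b = self.delta * self.b + 1.0
        return self.a / self.b                 # posterior mean rate estimate

rng = np.random.default_rng(3)
flow = GammaPoissonFlow()
true_rate = 4.0
for t in range(200):
    if t == 100:
        true_rate = 9.0                        # abrupt change the filter must track
    estimate = flow.update(rng.poisson(true_rate))
print(f"final rate estimate: {estimate:.2f} (true rate 9.0)")
```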
Abstract:
Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied to target kinematics modeling in various applications, including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As shown by these successful applications, Bayesian nonparametric models are able to adjust their complexity adaptively from data as necessary, and are resistant to overfitting or underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori, or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present a systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena, such as ocean currents, temperature distributions, and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.
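A minimal sketch of the Gaussian process modeling idea for a time-invariant spatial field is given below, fitting a GP to scattered noisy samples of a synthetic velocity component. The field, kernel, and library choice (scikit-learn) are assumptions for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)

def current_u(xy):
    """Synthetic stand-in for one velocity component of a spatial current field."""
    return np.sin(xy[:, 0]) * np.cos(xy[:, 1])

X_obs = rng.uniform(0.0, 2 * np.pi, size=(60, 2))          # sensor measurement locations
y_obs = current_u(X_obs) + 0.05 * rng.normal(size=60)      # noisy measurements

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05**2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_obs, y_obs)

X_query = np.array([[1.0, 2.0], [4.0, 5.0]])
mean, std = gp.predict(X_query, return_std=True)
for xy, m, s in zip(X_query, mean, std):
    print(f"u({xy[0]:.1f}, {xy[1]:.1f}) = {m:+.2f} +/- {s:.2f}")
```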
Novel information theoretic functions are developed for these Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler (KL) divergence is developed as the expectation of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models with respect to the future measurements. This approach is then extended to develop a new information value function that can be used to estimate target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is presented showing that the novel information theoretic functions are bounded. Based on this theorem, efficient estimators of the new information theoretic functions are designed and proved to be unbiased, with the variance of the resulting approximation error decreasing linearly as the number of samples increases. The computational complexity of optimizing the novel information theoretic functions under sensor dynamics constraints is studied and proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
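For reference, the closed-form identity underlying Gaussian-process expected-KL utilities, once the prior and posterior processes are evaluated at a finite set of points and reduce to multivariate normals, is the standard Gaussian KL divergence (a general identity, not the dissertation's exact expression):

```latex
% KL divergence between two d-dimensional Gaussians N_0(mu_0, Sigma_0) and
% N_1(mu_1, Sigma_1): the standard closed form behind GP-based expected-KL
% information measures.
\[
D_{\mathrm{KL}}\!\left(\mathcal{N}_0 \,\|\, \mathcal{N}_1\right)
= \tfrac{1}{2}\left[
\operatorname{tr}\!\left(\Sigma_1^{-1}\Sigma_0\right)
+ (\mu_1-\mu_0)^{\top}\Sigma_1^{-1}(\mu_1-\mu_0)
- d
+ \ln\frac{\det\Sigma_1}{\det\Sigma_0}
\right]
\]
```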
Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed. The efficiency of the greedy algorithm is demonstrated by a numerical experiment with ocean current data obtained from moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep line algorithm. Moreover, a lexicographic algorithm is designed based on the cumulative lower bound of the novel information theoretic functions for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation, based on the novel information theoretic functions, are superior at learning the target kinematics with little or no prior knowledge.
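A minimal sketch of the discrete-control greedy planning idea is shown below: at each step the sensor control maximizing an information value (here, the entropy reduction of a one-dimensional Gaussian belief) is selected, and the belief is updated with the resulting measurement. The measurement model and numbers are illustrative assumptions, not the dissertation's.

```python
import numpy as np

def expected_info_gain(prior_var, meas_var):
    """Entropy reduction of a Gaussian belief after a linear-Gaussian measurement."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / meas_var)
    return 0.5 * np.log(prior_var / post_var)

rng = np.random.default_rng(5)
belief_mean, belief_var = 0.0, 4.0          # Gaussian belief over a 1-D target position
target = 2.5
controls = np.linspace(-5.0, 5.0, 11)       # discrete candidate sensor positions

for step in range(5):
    # Assumed measurement model: noise grows with sensor-to-(believed)-target distance.
    noise = 0.5 + 0.2 * (controls - belief_mean) ** 2
    best = controls[np.argmax(expected_info_gain(belief_var, noise))]
    meas_var = 0.5 + 0.2 * (best - target) ** 2
    z = target + rng.normal(scale=np.sqrt(meas_var))
    # Standard Gaussian (Kalman-style) update of the belief.
    gain = belief_var / (belief_var + meas_var)
    belief_mean += gain * (z - belief_mean)
    belief_var *= (1.0 - gain)
    print(f"step {step}: sensor at {best:+.1f}, belief {belief_mean:+.2f} +/- {np.sqrt(belief_var):.2f}")
```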