918 results for Computational Chemistry, Modeling
Abstract:
Ribonucleotide reductases (RNRs) are essential enzymes that catalyze the reduction of ribonucleotides to 2'-deoxyribonucleotides, a critical step that produces the precursors for DNA replication and repair. Inactivation of RNR therefore halts production of the DNA precursors of viral or cancer cells, arresting the cycle of DNA replication. Among the compounds found to inhibit RNR, 2'-azido-2'-deoxynucleotide diphosphates (N3NDPs) have been investigated in depth as potent inhibitors. Decades of investigation have suggested that the inactivation of RNR by N3NDPs results from the formation of a nitrogen-centered radical (N·) that is covalently attached to the nucleotide at C3' and to cysteine residue C225 [3'-C(R-S-N·-C-OH)]. Biomimetic reactions for the generation of nitrogen-centered radicals similar to the one observed during the inactivation of RNR by azidonucleotides were investigated. The study included several components: (i) theoretical calculations showing the feasibility of the ring-closure reaction between thiyl radicals and the azido group; (ii) synthesis of model azido nucleosides with a linker attached at C3' or C5' bearing a thiol or vicinal dithiol functionality; (iii) generation of the thiyl radical, which plays a key role in initiating the RNR radical cascade, under both physiological and radiolysis conditions; and (iv) analysis of the nitrogen-centered radical species formed during the interaction between the thiyl radical and the azido group by electron paramagnetic resonance (EPR) spectroscopy. The aminyl radical species formed upon one-electron attachment to the azido group of 2'-azido-2'-deoxyuridine and its stereospecifically labelled 1'-, 2'-, 3'-, 4'- or 5,6-[²H₂]-analogues were also characterized. This dissertation provides insight into the mechanism of formation of the nitrogen-centered radical during the inactivation of RNRs by azidonucleotides, as well as the mechanism of action of RNRs, which may provide key information for the development of the next generation of antiviral and anticancer drugs.
Abstract:
Buildings and other infrastructure located in the coastal regions of the US have a higher level of wind vulnerability. Reducing the increasing property losses and casualties associated with severe windstorms has been a central research focus of the wind engineering community. The present wind engineering toolbox consists of building codes and standards, laboratory experiments, and field measurements. The American Society of Civil Engineers (ASCE) 7 standard provides wind loads only for buildings with common shapes; for complex cases it refers to physical modeling. Although that option can be economically viable for large projects, it is not cost-effective for low-rise residential houses. To circumvent these limitations, a numerical approach based on the techniques of Computational Fluid Dynamics (CFD) was developed. Recent advances in computing technology and significant developments in turbulence modeling are making numerical evaluation of wind effects a more affordable approach. The present study targeted cases that are not addressed by the standards, including wind loads on complex roofs of low-rise buildings, the aerodynamics of tall buildings, and the effects of complex surrounding buildings. Among all the turbulence models investigated, large eddy simulation (LES) performed best in predicting wind loads. Applying a spatially evolving, time-dependent wind velocity field with the relevant turbulence structures at the inlet boundaries was found to be essential. All results were compared and validated against experimental data. The study also demonstrated CFD's unique flow visualization and aerodynamic data generation capabilities, along with a better understanding of the complex three-dimensional aerodynamics of wind-structure interaction. With proper modeling that realistically represents the actual turbulent atmospheric boundary layer flow, CFD can offer an economical alternative to the existing wind engineering tools. CFD's easy accessibility is expected to transform the practice of structural design for wind, resulting in more wind-resilient and sustainable systems by encouraging optimal aerodynamic and structural/building design. This method will thus help ensure public safety and reduce economic losses due to wind perils.
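As a concrete illustration of the kind of inlet condition this abstract refers to, the sketch below builds a log-law mean wind profile for an atmospheric boundary layer; in an LES, time-dependent turbulent fluctuations (e.g., from a synthetic turbulence generator) would be superimposed on this mean. All values are illustrative assumptions, not parameters from the study.

```python
import numpy as np

# Minimal sketch (not the dissertation's solver): a log-law mean wind
# profile of the kind used to drive an atmospheric-boundary-layer (ABL)
# inlet in a CFD wind-load simulation. All values below are illustrative.
KAPPA = 0.41          # von Karman constant
U_REF = 30.0          # reference wind speed (m/s) at z_ref (assumed)
Z_REF = 10.0          # reference height (m)
Z0 = 0.03             # aerodynamic roughness length (m), open terrain

def ustar(u_ref: float, z_ref: float, z0: float) -> float:
    """Friction velocity implied by the reference speed and roughness."""
    return KAPPA * u_ref / np.log(z_ref / z0)

def mean_wind(z: np.ndarray, z0: float = Z0) -> np.ndarray:
    """Log-law mean streamwise velocity U(z) = (u*/kappa) * ln(z/z0)."""
    us = ustar(U_REF, Z_REF, z0)
    return (us / KAPPA) * np.log(np.maximum(z, z0) / z0)

heights = np.linspace(0.5, 100.0, 20)   # inlet node heights (m)
print(np.round(mean_wind(heights), 2))
```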
Abstract:
Recreational abuse of cocaine, methamphetamine, and morphine continues to be prevalent in the United States and around the world. While numerous methods of detection exist for each drug, they are generally limited by the lifetime of the parent drug and its metabolites in the body. However, the covalent modification of endogenous proteins by these drugs of abuse may provide biomarkers of exposure, extending detection windows beyond the lifetime of the parent molecules or metabolites in the free fraction. Additionally, the existence of covalently bound molecules arising from drug ingestion can offer insight into the downstream toxicities associated with each of these drugs. This research investigated the metabolism of cocaine, methamphetamine, and morphine in common in vitro assay systems, focusing specifically on the generation of reactive intermediates and metabolites that have the potential to form covalent protein adducts. The results demonstrated the formation of covalent adducts between biological cysteine thiols and reactive moieties on cocaine and morphine metabolites. Rigorous mass spectrometric analysis, in conjunction with in vitro metabolic activation, pharmacogenetic reaction phenotyping, and computational modeling, was used to characterize the structures and mechanisms of formation of each resulting thiol adduct. For cocaine, the data demonstrated the formation of adducts from a reactive arene epoxide intermediate, establishing a novel metabolic pathway for cocaine. For morphine, the data expanded on known adduct-forming pathways using sensitive and selective analysis techniques, following the known reactive metabolite morphinone and a proposed novel metabolite, morphine quinone methide. The data collected in this study describe novel metabolic events for multiple important drugs of abuse, culminating in detection methods and mechanistic descriptors useful to both medical and forensic investigators examining the toxicology of cocaine, methamphetamine, and morphine.
Abstract:
One of the most popular techniques for creating spatialized virtual sounds is based on the use of Head-Related Transfer Functions (HRTFs). HRTFs are signal processing models that represent the modifications undergone by an acoustic signal as it travels from a sound source to each of the listener's eardrums. These modifications are due to the interaction of the acoustic waves with the listener's torso, shoulders, head, and pinnae (outer ears). As such, HRTFs differ somewhat from listener to listener, and for a listener to perceive synthesized 3-D sound cues correctly, the synthesized cues must be similar to the listener's own HRTFs. Individual HRTFs can be measured with specialized recording systems; however, these systems are prohibitively expensive and restrict the portability of the 3-D sound system, and HRTF-based systems also face several computational challenges. This dissertation presents an alternative method for the synthesis of binaural spatialized sounds. The sound entering the pinna undergoes several reflective, diffractive, and resonant phenomena that determine the HRTF. Using signal processing tools such as Prony's signal modeling method, an appropriate set of time delays and a resonant frequency were used to approximate the measured Head-Related Impulse Responses (HRIRs). Statistical analysis was used to derive empirical equations describing how the reflections and resonances are determined by the shape and size of the pinna features, obtained from 3D images of the 15 experimental subjects modeled in the project. These equations were used to yield “Model HRTFs” that can create elevation effects. Listening tests conducted on 10 subjects showed that these model HRTFs are 5% more effective than generic HRTFs at localizing sounds in the frontal plane. The number of reversals (perception of a sound source above the horizontal plane when it is actually below the plane, and vice versa) was also reduced by 5.7%, showing the perceptual effectiveness of this approach. The model is simple yet versatile because it relies on easy-to-measure parameters to create an individualized HRTF. This low-order parameterized model also reduces computational and storage demands while maintaining a sufficient number of perceptually relevant spectral cues.
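To make the structural model concrete, here is a minimal sketch of a low-order HRIR built from a direct path, a few delayed pinna reflections, and one damped resonance, the same ingredients the abstract describes. The delays, gains, and resonance frequency below are placeholders, not the fitted values from the dissertation's statistical model.

```python
import numpy as np

# Minimal sketch (illustrative, not the dissertation's fitted model) of a
# low-order structural HRIR: a direct path, a few pinna reflections at
# anthropometry-dependent delays, and one damped resonance.
FS = 44100.0                      # sample rate (Hz)
N = 256                           # impulse-response length (samples)

def model_hrir(delays_ms, gains, f_res=4000.0, decay=2000.0):
    """Direct impulse + scaled delayed reflections + damped sinusoid resonance."""
    t = np.arange(N) / FS
    h = np.zeros(N)
    h[0] = 1.0                                   # direct sound
    for d_ms, g in zip(delays_ms, gains):        # pinna reflections
        h[int(round(d_ms * 1e-3 * FS))] += g
    h += 0.3 * np.exp(-decay * t) * np.sin(2 * np.pi * f_res * t)  # resonance
    return h

# Example: two reflections at 0.10 ms and 0.25 ms, as might be derived
# from pinna-feature measurements (placeholder values).
h = model_hrir(delays_ms=[0.10, 0.25], gains=[0.6, -0.4])
H = np.fft.rfft(h)                # corresponding model HRTF (frequency domain)
```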
Abstract:
A novel modeling approach is applied to karst hydrology: long-standing problems in karst hydrology and solute transport are addressed using lattice Boltzmann methods (LBMs), which contrast with the other modeling approaches that have been applied to karst hydrology. The motivation of this dissertation is to develop new computational models for solving ground water hydraulics and transport problems in karst aquifers, which are widespread around the globe, and to test the viability of the LBM as a robust alternative numerical technique for solving large-scale hydrological problems. The LB models applied in this research are briefly reviewed, and implementation issues are discussed. The dissertation focuses on testing the LB models. The LBM is tested for two different types of inlet boundary conditions for solute transport in finite and effectively semi-infinite domains, and the solutions are verified against analytical solutions. Zero-diffusion transport and Taylor dispersion in slits are also simulated and compared against analytical solutions. These results demonstrate the LBM’s flexibility as a solute transport solver. The LBM is then applied to simulate solute transport and fluid flow in porous media traversed by larger conduits: an LBM-based macroscopic flow solver (based on Darcy’s law) is linked with an anisotropic dispersion solver, and spatial breakthrough curves in one and two dimensions are fitted against the available analytical solutions. This provides a steady flow model with capabilities routinely found in ground water flow and transport models (e.g., the combination of MODFLOW and MT3D); however, the new LBM-based model retains the ability to solve the inertial flows that are characteristic of karst aquifer conduits. Transient flows in a confined aquifer are solved using two different LBM approaches. The analogy between Fick’s second law (the diffusion equation) and the transient ground water flow equation is used to solve for the transient head distribution, and an altered-velocity flow solver with a source/sink term is applied to simulate a drawdown curve. Hydraulic parameters such as transmissivity and the storage coefficient are linked with LB parameters. These capabilities complete the LBM’s effective treatment of the types of processes that are simulated by standard ground water models. The LB model is verified against field data for drawdown in a confined aquifer.
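For readers unfamiliar with lattice Boltzmann solvers, the following minimal D1Q2 BGK sketch illustrates the verification style described above: simulate pure diffusion of a solute and compare against the analytical Gaussian solution. It is a generic textbook LBM in lattice units, not the dissertation's code.

```python
import numpy as np

# Minimal sketch: D1Q2 lattice Boltzmann (BGK) solver for pure diffusion
# of a solute, verified against the analytical Gaussian solution.
# Lattice units; for D1Q2, c_s^2 = 1 so the diffusivity is D = tau - 1/2.
NX, TAU, NSTEPS = 200, 0.8, 400
D = TAU - 0.5

C0 = np.zeros(NX); C0[NX // 2] = 1.0          # point release at the center
f = np.stack([C0 / 2, C0 / 2])                # f[0]: e = +1, f[1]: e = -1

for _ in range(NSTEPS):
    C = f.sum(axis=0)                         # solute concentration
    feq = np.stack([C / 2, C / 2])            # zero-velocity equilibrium
    f += -(f - feq) / TAU                     # BGK collision
    f[0] = np.roll(f[0], 1)                   # stream right
    f[1] = np.roll(f[1], -1)                  # stream left

x = np.arange(NX) - NX // 2
analytic = np.exp(-x**2 / (4 * D * NSTEPS)) / np.sqrt(4 * np.pi * D * NSTEPS)
print("max abs error vs. analytical solution:",
      np.abs(f.sum(axis=0) - analytic).max())
```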
Abstract:
This dissertation focused on developing an integrated surface-subsurface hydrologic simulation numerical model by programming and testing the coupling of the USGS MODFLOW-2005 Groundwater Flow Process (GWF) package (USGS, 2005) with the 2D surface water routing model FLO-2D (O’Brien et al., 1993). The coupling included the procedures necessary to numerically integrate and verify both models as a single computational software system, referred to hereafter as WHIMFLO-2D (Wetlands Hydrology Integrated Model). An improved physical formulation of flow resistance through vegetation in shallow waters, based on the concept of drag force, was implemented for simulating floodplains, while the classical methods (e.g., Manning, Chezy, Darcy-Weisbach) were retained for calculating flow resistance in canals and deeper waters. A preliminary demonstration exercise of WHIMFLO-2D at an existing field site was developed for the Loxahatchee Impoundment Landscape Assessment (LILA), an 80-acre area located at the Arthur R. Marshall Loxahatchee National Wildlife Refuge in Boynton Beach, Florida. After applying a number of simplifying assumptions, the results illustrated the ability of the model to simulate the hydrology of a wetland. In this illustrative case, a comparison between measured and simulated stage levels showed an average error of 0.31% with a maximum error of 2.8%, and a comparison of measured and simulated groundwater head levels showed an average error of 0.18% with a maximum of 2.9%.
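A minimal sketch of a drag-force resistance formulation of the general kind described above (not necessarily WHIMFLO-2D's exact equations): balancing the gravity term against stem drag for emergent vegetation gives a friction slope S_f = C_d·a·V²/(2g), which can also be expressed as a depth-dependent equivalent Manning coefficient. All numbers in the example are assumptions.

```python
import math

# Minimal sketch (hedged): drag-force flow resistance for emergent
# vegetation in shallow water, expressed as a friction slope and as an
# equivalent Manning coefficient. Assumes the momentum balance
# rho*g*h*S_f = 0.5*rho*C_d*a*h*V^2 for a wide, shallow flow.
G = 9.81  # gravitational acceleration (m/s^2)

def friction_slope_vegetation(v: float, cd: float, a: float) -> float:
    """Friction slope S_f = C_d * a * V^2 / (2g).
    v  : depth-averaged velocity (m/s)
    cd : stem drag coefficient (~1.0 for cylinders)
    a  : frontal area per unit volume (m^-1), e.g., stem diameter * density
    """
    return cd * a * v * v / (2.0 * G)

def equivalent_manning_n(cd: float, a: float, h: float) -> float:
    """Manning n giving the same S_f at depth h (hydraulic radius ~ h)."""
    return math.sqrt(cd * a / (2.0 * G)) * h ** (2.0 / 3.0)

# Example (placeholder values): 0.3 m/s flow through 1 cm stems at
# 50 stems/m^2 (a = 0.5 m^-1), 0.4 m deep, Cd = 1.0.
print(friction_slope_vegetation(0.3, 1.0, 0.5))   # ~0.0023
print(equivalent_manning_n(1.0, 0.5, 0.4))        # ~0.087
```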
Abstract:
Shipboard power systems have different characteristics from utility power systems. In a shipboard power system it is crucial that the systems and equipment work at their peak performance levels. One of the most demanding aspects of simulating shipboard power systems is connecting the device under test to a real-time simulated dynamic equivalent in an environment with actual hardware in the loop (HIL). Real-time simulation can be achieved using a multi-distributed modeling concept, in which the global system model is distributed over several processors through a communication link. The advantage of this approach is that it permits a gradual change from pure simulation to actual application. To perform system studies in such an environment, physical phase-variable models of different components of the shipboard power system were developed using operational parameters obtained from finite element (FE) analysis. These models were developed for two types of studies: low- and high-frequency studies. Low-frequency studies were used to examine shipboard power system behavior under load switching and faults; high-frequency studies were used to predict abnormal conditions due to overvoltage and the harmonic behavior of components. Different experiments were conducted to validate the developed models, and the simulation and experimental results showed excellent agreement. The behavior of shipboard power system components under internal faults was investigated using FE analysis; this technique is crucial for shipboard power system fault detection given the lack of comprehensive fault test databases. A wavelet-based methodology for feature extraction from shipboard power system current signals was developed for harmonic and fault diagnosis studies. This modeling methodology can be used to evaluate and predict the future behavior of shipboard power system components at the design stage, which will reduce development cycles, cut overall cost, prevent failures, and allow each subsystem to be tested exhaustively before it is integrated into the system.
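As an illustration of the wavelet-based feature extraction mentioned above, the sketch below decomposes a synthetic current signal with a discrete wavelet transform (PyWavelets) and uses per-band energies as a feature vector. The sample rate, wavelet choice (db4), decomposition level, and fault transient are assumptions, not the dissertation's settings.

```python
import numpy as np
import pywt  # PyWavelets

# Minimal sketch (illustrative, not the dissertation's implementation) of
# wavelet-based feature extraction from a power-system current signal:
# decompose with a discrete wavelet transform and use the energy of each
# band as a feature vector for fault/harmonic diagnosis.
FS = 10_000                                     # sample rate (Hz), assumed
t = np.arange(0, 0.2, 1 / FS)
current = np.sin(2 * np.pi * 60 * t)            # 60 Hz fundamental
current += 0.2 * np.sin(2 * np.pi * 300 * t)    # 5th harmonic
current[1000:1040] += 1.5                       # crude synthetic fault transient

def wavelet_features(x: np.ndarray, wavelet: str = "db4", level: int = 5):
    """Energy per decomposition band: [approx, detail_L, ..., detail_1]."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

print(wavelet_features(current))
```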
Abstract:
The successful performance of a hydrological model is usually challenged by the quality of the sensitivity analysis, calibration, and uncertainty analysis carried out in the modeling exercise and the subsequent simulation results. This is especially important under changing climatic conditions, where the uncertainties associated with climate models and downscaling processes increase the complexity of the hydrological modeling system. In response to these challenges, and to improve the performance of hydrological models under changing climatic conditions, this research proposed five new methods for supporting hydrological modeling. First, a design-of-experiment aided sensitivity analysis and parameterization (DOE-SAP) method was proposed to investigate the significant parameters and provide more reliable sensitivity analysis for improving parameterization during hydrological modeling. Better calibration results, along with advanced sensitivity analysis of the significant parameters and their interactions, were achieved in the case study. Second, a comprehensive uncertainty evaluation scheme was developed to evaluate three uncertainty analysis methods: sequential uncertainty fitting version 2 (SUFI-2), generalized likelihood uncertainty estimation (GLUE), and parameter solution (ParaSol). The results showed that SUFI-2 performed better than the other two methods based on calibration and uncertainty analysis results, and the proposed evaluation scheme demonstrated its capability to select the most suitable uncertainty method for a given case study. Third, a novel sequential multi-criteria based calibration and uncertainty analysis (SMC-CUA) method was proposed to improve the efficiency of calibration and uncertainty analysis and to control the phenomenon of equifinality. The results showed that the SMC-CUA method provided better uncertainty analysis results with higher computational efficiency than the SUFI-2 and GLUE methods, and controlled parameter uncertainty and the equifinality effect without sacrificing simulation performance. Fourth, an innovative response-based statistical evaluation method (RESEM) was proposed for estimating uncertainty propagation effects and providing long-term predictions of hydrological responses under changing climatic conditions. Using RESEM, the uncertainty propagated from statistical downscaling to hydrological modeling can be evaluated. Fifth, an integrated simulation-based evaluation system for uncertainty propagation analysis (ISES-UPA) was proposed for investigating the effects and contributions of different uncertainty components to the total uncertainty propagated from statistical downscaling. Using ISES-UPA, the uncertainty from statistical downscaling, the uncertainty from hydrological modeling, and the total uncertainty from both sources can be compared and quantified. The feasibility of all the methods was tested using hypothetical and real-world case studies. The proposed methods can also be integrated as a hydrological modeling system to better support hydrological studies under changing climatic conditions, and the results from such an integrated system can serve as scientific references for decision makers seeking to reduce the potential risk of damage caused by extreme events in long-term water resource management and planning.
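For context, GLUE, one of the uncertainty methods evaluated above, can be sketched in a few lines: sample parameter sets, score each simulation with a likelihood measure such as Nash-Sutcliffe efficiency, keep the "behavioral" runs above a threshold, and read uncertainty bounds off their percentiles. The toy model, priors, and threshold below are assumptions for illustration only, not the dissertation's hydrological model.

```python
import numpy as np

# Minimal sketch (hedged) of the GLUE idea: Monte Carlo parameter
# sampling, a likelihood screen (Nash-Sutcliffe efficiency), and
# uncertainty bounds from the behavioral simulations.
rng = np.random.default_rng(0)

def toy_model(theta, t):                 # stand-in rainfall-runoff response
    return theta[0] * np.exp(-theta[1] * t)

t = np.linspace(0, 10, 50)
obs = toy_model([2.0, 0.3], t) + rng.normal(0, 0.05, t.size)  # synthetic obs

def nse(sim, obs):                       # Nash-Sutcliffe efficiency
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

samples = rng.uniform([0.5, 0.05], [4.0, 1.0], size=(5000, 2))  # uniform priors
sims = np.array([toy_model(th, t) for th in samples])
scores = np.array([nse(s, obs) for s in sims])

behavioral = sims[scores > 0.7]          # behavioral threshold (judgment call)
lower, upper = np.percentile(behavioral, [5, 95], axis=0)
print(f"{len(behavioral)} behavioral runs; 90% band width at t=0: "
      f"{upper[0] - lower[0]:.3f}")
```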
Abstract:
High-resolution sedimentary records of major and minor elements (Al, Ba, Ca, Sr, Ti) and total organic carbon (TOC), together with profiles of pore water constituents (SO₄²⁻, CH₄, Ca²⁺, Ba²⁺, Mg²⁺, alkalinity), were obtained for two gravity cores (core 755, 501 m water depth, and core 214, 1686 m water depth) from the northwestern Black Sea. The records were examined in order to gain insight into the cycling of Ba in anoxic marine sediments characterized by a shallow sulfate-methane transition (SMT), as well as the applicability of barite as a primary productivity proxy in such a setting. The Ba records are strongly overprinted by diagenetic barite (BaSO₄) precipitation and remobilization; authigenic Ba enrichments were found at both sites at and slightly above the current SMT. Transport-reaction modeling was applied to simulate the migration of the SMT under the changing geochemical conditions after the Holocene seawater intrusion into the Black Sea, and on this basis the sediment intervals affected by diagenetic Ba redistribution were identified. The results reveal that the intense overprint of Ba and Baxs (Ba excess above the detrital average) strongly limits its correlation with primary productivity. These findings have implications for other modern and ancient anoxic basins, such as sections covering the Oceanic Anoxic Events, for which Ba is frequently used as a primary productivity indicator. Our study also demonstrates the limitations of using Baxs as a tracer for downward migrations of the SMT: due to high sedimentation rates at the investigated sites, diagenetic barite fronts are buried below the SMT within a relatively short period. Thus, 'relict' barite fronts would only be preserved for a few thousand years, if at all.
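A highly simplified sketch of the transport-reaction idea used above (the actual model resolves many more species and the post-intrusion history): a 1-D finite-difference pore-water sulfate profile with diffusion from the sediment surface and consumption concentrated near the SMT. All parameter values are placeholders, not values from the study.

```python
import numpy as np

# Minimal sketch (hedged): transient 1-D pore-water sulfate profile with
# molecular diffusion and a sulfate sink peaking at an assumed SMT depth.
NZ, DZ = 100, 0.05            # grid: 100 nodes, 5 cm spacing (m)
D = 1e-10 * 3.15e7            # sulfate diffusivity (m^2/yr), ~1e-10 m^2/s
DT = 0.01                     # time step (yr); DT*D/DZ^2 << 0.5, stable
z = np.arange(NZ) * DZ

so4 = np.full(NZ, 17.0)       # mmol/L, assumed brackish bottom-water value
rate = 5.0 * np.exp(-((z - 3.0) / 0.3) ** 2)   # consumption peak at 3 m (assumed SMT)

for _ in range(200_000):      # march ~2000 yr toward steady state
    lap = (np.roll(so4, -1) - 2 * so4 + np.roll(so4, 1)) / DZ**2
    so4 += DT * (D * lap - rate * np.maximum(so4, 0) / (so4 + 1.0))
    so4[0] = 17.0             # fixed concentration at the sediment surface
    so4[-1] = so4[-2]         # no-flux bottom boundary
    so4 = np.maximum(so4, 0.0)

print("modeled SMT depth ~", z[np.argmax(so4 < 0.5)], "m")
```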
Abstract:
Ocean acidification in response to rising atmospheric CO₂ partial pressures is widely expected to reduce calcification by marine organisms. From the mid-Mesozoic, coccolithophores have been major calcium carbonate producers in the world's oceans, today accounting for about a third of the total marine CaCO₃ production. Here, we present laboratory evidence that calcification and net primary production in the coccolithophore species Emiliania huxleyi are significantly increased by high CO₂ partial pressures. Field evidence from the deep ocean is consistent with these laboratory conclusions, indicating that over the past 220 years there has been a 40% increase in average coccolith mass. Our findings show that coccolithophores are already responding, and will probably continue to respond, to rising atmospheric CO₂ partial pressures, which has important implications for biogeochemical modeling of future oceans and climate.
Abstract:
This thesis focuses on the development of algorithms that allow protein design calculations to incorporate more realistic modeling assumptions. Protein design algorithms search large sequence spaces for protein sequences that are biologically and medically useful. Better modeling could improve the chance of success in designs and expand the range of problems to which these algorithms are applied. I have developed algorithms to improve modeling of backbone flexibility (DEEPer) and of more extensive continuous flexibility in general (EPIC and LUTE). I have also developed algorithms to perform multistate designs, which account for effects like specificity, with provable guarantees of accuracy (COMETS), and to accommodate a wider range of energy functions in design (EPIC and LUTE).
Abstract:
Advances in three related areas (state-space modeling, sequential Bayesian learning, and decision analysis) are addressed, together with the statistical challenges of scalability and associated dynamic sparsity. The key theme tying the three areas together is Bayesian model emulation: solving challenging analytical and computational problems using creative model emulators. This idea drives theoretical and applied advances in non-linear, non-Gaussian state-space modeling, dynamic sparsity, decision analysis, and statistical computation, across the linked contexts of multivariate time series and dynamic network studies. Examples and applications in financial time series and portfolio analysis, macroeconomics, and internet studies from computational advertising demonstrate the utility of the core methodological innovations.
Chapter 1 summarizes the three areas/problems and the key idea of emulation in each. Chapter 2 discusses the sequential analysis of latent threshold models, using emulating models that allow analytical filtering to enhance the efficiency of posterior sampling. Chapter 3 examines the emulator (or synthetic) model in decision analysis, which is equivalent to the loss function in the original minimization problem, and shows its performance in the context of sequential portfolio optimization. Chapter 4 describes a method for modeling streaming count data observed on a large network, which relies on emulating the whole dependent network model with independent, conjugate sub-models customized to each set of flows. Chapter 5 reviews these advances and makes concluding remarks.
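To illustrate the Chapter 4 emulation idea in its simplest form, the sketch below filters streaming counts on each network flow with an independent, conjugate Poisson-Gamma sub-model using a discount-factor evolution. This is a generic conjugate filter under assumed settings, not the dissertation's exact model.

```python
import numpy as np

# Minimal sketch (hedged): each flow's streaming counts are filtered by an
# independent, conjugate Poisson-Gamma sub-model. Discounting (a, b) keeps
# the posterior mean but inflates its variance, letting the rate drift.
rng = np.random.default_rng(1)

def filter_flow(counts, a0=1.0, b0=1.0, discount=0.95):
    """Sequential Gamma(a, b) posterior for a slowly varying Poisson rate."""
    a, b = a0, b0
    means = []
    for y in counts:
        a, b = discount * a, discount * b      # evolution step
        a, b = a + y, b + 1.0                  # conjugate Poisson update
        means.append(a / b)                    # filtered rate estimate
    return np.array(means)

# Simulate counts on three flows with different rates, then filter each
# flow separately (the independent sub-models).
true_rates = [2.0, 10.0, 40.0]
counts = np.array([rng.poisson(r, size=100) for r in true_rates])
for r, y in zip(true_rates, counts):
    est = filter_flow(y)
    print(f"true rate {r:5.1f}  final filtered estimate {est[-1]:5.1f}")
```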
Abstract:
In the last two decades, the field of homogeneous gold catalysis has been extremely active, growing at a rapid pace. Another rapidly growing field, computational chemistry, has often been applied to the investigation of various gold-catalyzed reaction mechanisms. Unfortunately, a number of recent mechanistic studies have utilized computational methods that have been shown to be inappropriate and inaccurate in their description of gold chemistry. This work presents an overview of available computational methods with a focus on the approximations and limitations inherent in each, and offers a review of experimentally characterized gold(I) complexes and proposed mechanisms as compared with their computationally modeled counterparts. No aim is made to identify a “recommended” computational method for investigations of gold catalysis; rather, discrepancies between experimentally and computationally obtained values are highlighted, and the systematic errors between different computational methods are discussed.