917 results for Computational Dialectometry
Fast Structure-Based Assignment of 15N HSQC Spectra of Selectively 15N-Labeled Paramagnetic Proteins
Abstract:
A novel strategy for fast NMR resonance assignment of N-15 HSQC spectra of proteins is presented. It requires the structure coordinates of the protein, a paramagnetic center, and one or more residue-selectively N-15-labeled samples. Comparison of sensitive undecoupled N-15 HSQC spectra recorded on paramagnetic and diamagnetic samples yields data for every cross-peak on pseudocontact shift, paramagnetic relaxation enhancement, cross-correlation between Curie-spin and dipole-dipole relaxation, and residual dipolar coupling. Comparison of these four different paramagnetic quantities with predictions from the three-dimensional structure simultaneously yields the resonance assignment and the anisotropy of the susceptibility tensor of the paramagnetic center. The method is demonstrated with the 30 kDa complex between the N-terminal domain of the epsilon subunit and the theta subunit of Escherichia coli DNA polymerase III. The program PLATYPUS was developed to perform the assignment, provide a measure of reliability of the assignment, and determine the susceptibility tensor anisotropy.
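At its core, this kind of structure-based assignment is a one-to-one matching between observed cross-peaks and residues predicted from the structure. Below is a minimal sketch of that matching idea using only pseudocontact shifts and synthetic numbers; PLATYPUS additionally scores PRE, RDC, and Curie-spin cross-correlation data and fits the susceptibility tensor, none of which is reproduced here.

```python
# Minimal sketch: assignment as a combinatorial matching problem over
# hypothetical pseudocontact shift (PCS) values only.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)

# Hypothetical PCS values (ppm) predicted for 10 labeled residues
# from the structure and an assumed susceptibility tensor.
predicted = rng.normal(0.0, 0.5, size=10)

# "Observed" PCS for 10 unassigned cross-peaks: a shuffled, noisy
# copy of the predictions.
perm = rng.permutation(10)
observed = predicted[perm] + rng.normal(0.0, 0.02, size=10)

# Cost matrix: mismatch between each peak and each candidate residue.
cost = np.abs(observed[:, None] - predicted[None, :])

# Globally optimal one-to-one assignment (Hungarian algorithm).
peaks, residues = linear_sum_assignment(cost)
print("recovered assignment correct:", np.array_equal(residues, perm))
```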
Abstract:
The BR algorithm is a novel and efficient method for finding all eigenvalues of upper Hessenberg matrices, but it has not previously been applied to eigenanalysis for power system small signal stability. This paper analyzes the differences between the BR and QR algorithms, comparing their performance in terms of CPU time (based on stopping criteria) and storage requirements. The BR algorithm uses accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce the computation time while maintaining a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing appropriate precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm in eigenanalysis of 39-, 68-, 115-, 300-, and 600-bus systems. The experimental results suggest that the BR algorithm is the more efficient algorithm for large-scale power system small signal stability eigenanalysis.
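For orientation, here is a sketch of the eigenanalysis workflow being benchmarked: reduce the state matrix to upper Hessenberg form, compute all eigenvalues, and screen for poorly damped modes. Standard libraries implement the QR algorithm; the BR algorithm is not available in SciPy, so this illustrates only the problem setting, not the BR method itself, and the state matrix is a random stand-in.

```python
import numpy as np
from scipy.linalg import hessenberg, eigvals

rng = np.random.default_rng(1)
n = 200                      # stand-in for a power-system state matrix
A = rng.standard_normal((n, n))

H = hessenberg(A)            # QR (or BR) iteration operates on this form
lam = eigvals(H)             # LAPACK QR-based eigenvalue solver

# Small-signal screening: flag poorly damped oscillatory modes.
osc = lam[np.abs(lam.imag) > 1e-8]
zeta = -osc.real / np.abs(osc)          # damping ratios
print("modes with damping ratio < 0.05:", np.sum(zeta < 0.05))
```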
Abstract:
Silicic volcanic eruptions are typically accompanied by repetitive long-period (LP) seismicity that originates from a small region of the upper conduit. These signals have the capability to advance eruption prediction, since they commonly precede a change in eruption vigour. Shear bands forming along the conduit wall, where the shear stresses are highest, have been proposed as the seismic trigger. However, existing computational models using simple magma strength models are unable to generate shear bands at the depths where the LP signals originate. Presented here is a model in which the magma strength is determined from a constitutive relationship dependent upon crystallinity and pressure. This results in a depth-dependent magma strength, analogous to planetary lithospheres. Hence, in shallow, highly crystalline regions a macroscopically discontinuous, brittle type of deformation will prevail, whilst in deeper, crystal-poor regions a macroscopically continuous, plastic deformation mechanism will operate. This produces a depth at which the brittle-ductile transition occurs, and here shear bands disconnected from the free surface may develop. We utilize the finite element method in axisymmetric coordinates to model magma flow as a viscoplastic material, simulating quasi-static shear bands along the walls of a volcanic conduit. Model results constrained to the Soufrière Hills Volcano, Montserrat, show the generation of two types of shear bands: upper-conduit shear bands that form between the free surface and a few hundred metres below it, and discrete shear bands that form at the depths where LP seismicity is measured to occur, corresponding to the brittle-ductile transition and the plastic shear region. Simulating a seismic event is beyond the scope of the model, although the modelled viscosity within the discrete shear bands suggests a failure and healing cycle time that supports the observed LP seismicity repeat times. However, due to the paucity of data and the large parameter space available, these results can only be considered qualitative rather than quantitative at this stage.
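The brittle-ductile transition idea can be illustrated with a back-of-envelope calculation: locate the crossover depth between a pressure-dependent brittle strength and a crystallinity-dependent plastic strength. All parameter values and functional forms below are hypothetical stand-ins, not the paper's FEM constitutive model.

```python
# Illustrative crossover calculation (hypothetical parameters): brittle
# deformation prevails where it is the weaker (favoured) mechanism.
import numpy as np

depth = np.linspace(0.0, 2000.0, 2001)          # m below surface
rho, g = 2400.0, 9.81
p = rho * g * depth                              # lithostatic pressure, Pa

# Brittle (Mohr-Coulomb-like) strength grows with pressure.
cohesion, friction = 1e6, 0.6
tau_brittle = cohesion + friction * p

# Plastic flow strength drops as crystallinity falls with depth
# (hypothetical linear crystal-fraction profile).
phi = np.clip(0.7 - 3e-4 * depth, 0.0, 1.0)      # crystal fraction
tau_plastic = 5e6 + 4e8 * phi**2                 # hypothetical law

# Shallow: tau_brittle < tau_plastic (brittle prevails); the crossover
# marks the brittle-ductile transition.
bdt = depth[np.argmin(np.abs(tau_brittle - tau_plastic))]
print(f"brittle-ductile transition near {bdt:.0f} m depth")
```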
Abstract:
Formaldehyde-derived oxazolidine derivatives 4-7 of the beta-adrenoreceptor antagonists metoprolol 1, atenolol 2 and timolol 3 have been synthesised. Conformational analysis of 1-3 and the oxazolidine derivatives 4-7 has been performed using H-1 NMR spectroscopy and computational methods. The H-1 NMR studies show that for the aryloxypropanolamine beta-adrenoreceptor antagonists there is a predominance of the conformer in which the amine group is approximately antiperiplanar or trans to the aryloxymethylene group. Both H-1 NMR data and theoretical studies indicate that the oxazolidine derivatives 4-7 and the aryloxypropanolamine beta-adrenoreceptor antagonists 1-3 adopt similar conformations around the beta-amino alcohol moiety. Thus, oxazolidine ring formation does not dramatically alter the preferred conformation adopted by the beta-amino alcohol moiety of 1-3. Oxazolidine derivatives of aryloxypropanolamine beta-adrenoreceptor antagonists may therefore be appropriate as prodrugs, or semi-rigid analogues, when greater lipophilicity is required for drug delivery.
Abstract:
Extended gcd calculation has a long history and plays an important role in computational number theory and linear algebra. Recent results have shown that finding optimal multipliers in extended gcd calculations is difficult. We present an algorithm which uses lattice basis reduction to produce small integer multipliers x_1, ..., x_m for the equation s = gcd(s_1, ..., s_m) = x_1 s_1 + ... + x_m s_m, where s_1, ..., s_m are given integers. The method generalises to produce small unimodular transformation matrices for computing the Hermite normal form of an integer matrix.
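The baseline problem is easy to state in code. Here is a minimal sketch of the multi-integer extended gcd, folding in one integer at a time; the paper's actual contribution, using lattice basis reduction (LLL-style) to make the multipliers small, is not reproduced, and the multipliers produced below can be large.

```python
# Multi-integer extended gcd: find x_1..x_m with
# x_1*s_1 + ... + x_m*s_m == gcd(s_1, ..., s_m).
from math import gcd

def ext_gcd(a: int, b: int):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def multi_ext_gcd(s):
    """Return (g, xs) with sum(x*si) == g == gcd of all of s."""
    g, xs = s[0], [1]
    for si in s[1:]:
        g, u, v = ext_gcd(g, si)
        xs = [u * x for x in xs] + [v]   # update all earlier multipliers
    return g, xs

s = [1047, 2037, 4057, 5023]
g, xs = multi_ext_gcd(s)
assert g == gcd(*s) == sum(x * si for x, si in zip(xs, s))
print(g, xs)
```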
Abstract:
The use of computational fluid dynamics simulations for calibrating a flush air data system is described. In particular, the flush air data system of the HYFLEX hypersonic vehicle is used as a case study. The HYFLEX air data system consists of nine pressure ports located flush with the vehicle nose surface, connected to onboard pressure transducers. After appropriate processing, surface pressure measurements can be converted into useful air data parameters. The processing algorithm requires an accurate pressure model, which relates air data parameters to the measured pressures. In the past, such pressure models have been calibrated using combinations of flight data, ground-based experimental results, and numerical simulation. We perform a calibration of the HYFLEX flush air data system using computational fluid dynamics simulations exclusively. The simulations are used to build an empirical pressure model that accurately describes the HYFLEX nose pressure distribution over a range of flight conditions. We believe that computational fluid dynamics provides a quick and inexpensive way to calibrate the air data system and is applicable to a broad range of flight conditions. When tested with HYFLEX flight data, the calibrated system is found to work well. It predicts vehicle angle of attack and angle of sideslip to accuracy levels that generally satisfy flight control requirements. Dynamic pressure is predicted to within the resolution of the onboard inertial measurement unit. We find that wind-tunnel experiments and flight data are not necessary to accurately calibrate the HYFLEX flush air data system for hypersonic flight.
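The inversion step (pressures in, air data out) can be sketched with a toy pressure model. The code below assumes a modified-Newtonian surface pressure law for ports in the pitch plane and fits angle of attack, dynamic pressure, and static pressure by least squares; the real HYFLEX system uses an empirical model calibrated against CFD over many flight conditions, and every number here is hypothetical.

```python
# Toy flush-air-data-system inversion under an assumed
# modified-Newtonian model: p_i = p_inf + q * Cpmax * cos^2(theta_i - alpha).
import numpy as np
from scipy.optimize import least_squares

theta = np.deg2rad([-40.0, -20.0, 0.0, 20.0, 40.0])  # port angles
CPMAX = 1.83                                          # hypersonic limit value

def port_pressures(alpha, q, p_inf):
    return p_inf + q * CPMAX * np.cos(theta - alpha) ** 2

# Stand-in for CFD/flight data: true state plus transducer noise.
truth = (np.deg2rad(8.0), 12.0e3, 2.0e3)             # alpha, q, p_inf
rng = np.random.default_rng(2)
p_meas = port_pressures(*truth) + rng.normal(0, 20.0, theta.size)

# Invert the pressure model for the air data parameters.
fit = least_squares(lambda x: port_pressures(*x) - p_meas,
                    x0=(0.0, 10.0e3, 1.0e3))
alpha_hat, q_hat, p_hat = fit.x
print(f"alpha = {np.rad2deg(alpha_hat):.2f} deg, q = {q_hat:.0f} Pa")
```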
Abstract:
We use theoretical and numerical methods to investigate the general pore-fluid flow patterns near geological lenses in hydrodynamic and hydrothermal systems, respectively. Analytical solutions have been rigorously derived for the pore-fluid velocity, stream function and excess pore-fluid pressure near a circular lens in a hydrodynamic system. These analytical solutions provide not only a better understanding of the physics behind the problem, but also a valuable benchmark for validating any numerical method. Since a geological lens is surrounded by a medium of large extent in nature, and the finite element method is efficient only at modelling media of finite size, the determination of the size of the computational domain of a finite element model, which is often overlooked by numerical analysts, is very important for ensuring both the efficiency of the method and the accuracy of the numerical solution obtained. To highlight this issue, we use the derived analytical solutions to deduce a rigorous mathematical formula for designing the computational domain size of a finite element model. The proposed formula indicates that, no matter how fine the mesh or how high the order of the elements, the desired accuracy of a finite element solution for pore-fluid flow near a geological lens cannot be achieved unless the size of the finite element model is determined appropriately. Once the finite element computational model has been appropriately designed and validated in a hydrodynamic system, it is used to examine general pore-fluid flow patterns near geological lenses in hydrothermal systems. Some interesting conclusions on the behaviour of geological lenses in hydrodynamic and hydrothermal systems have been reached through the analytical and numerical analyses carried out in this paper.
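The domain-size argument can be illustrated with a scaling sketch. It assumes the standard 2-D potential-flow result that a circular inclusion perturbs a uniform far field like a dipole, so the relative velocity perturbation decays as (a/r)^2; this is the scaling behind such a formula, not the paper's exact expression.

```python
# Back-of-envelope domain sizing: require the lens-induced perturbation
# (a/R)**2 to fall below a tolerance at the truncated outer boundary.
import numpy as np

a = 100.0                                   # lens radius, m (hypothetical)
for tol in (1e-1, 1e-2, 1e-3):
    R = a / np.sqrt(tol)                    # minimum domain radius
    print(f"tolerance {tol:.0e}: domain radius >= {R:,.0f} m "
          f"({R / a:.0f} lens radii)")
```

The point matches the abstract: a 0.1 per cent truncation error already demands a domain about 30 lens radii across, regardless of how fine the mesh is.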
Abstract:
We use the finite element method to solve reactive mass transport problems in fluid-saturated porous media. In particular, we discuss the mathematical expression of the chemical reaction terms involved in the mass transport equations for an isothermal, non-equilibrium chemical reaction. It turns out that the Arrhenius law in chemistry is a good mathematical expression for such non-equilibrium chemical reactions, especially from the computational point of view. Using the finite element method and the Arrhenius law, we investigate the distributions of pH (i.e. the concentration of H+) and the relevant reactive species in a groundwater system. Although the main focus of this study is on contaminant transport problems in groundwater systems, the related numerical techniques and principles are equally applicable to orebody formation problems in the geosciences. Copyright (C) 1999 John Wiley & Sons, Ltd.
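The Arrhenius law itself is k = A exp(-Ea / (R T)). Below is a minimal sketch of how such a rate term couples to transport: a first-order reaction consuming a species inside an explicit 1-D advection step. All parameter values are hypothetical, and the scheme is a simple upwind stand-in for the paper's finite element discretisation.

```python
# First-order Arrhenius source term in a 1-D advection-reaction step.
import numpy as np

A_f, Ea, R_gas = 1.0e6, 6.0e4, 8.314     # 1/s, J/mol, J/(mol K)
T = 330.0                                 # K (isothermal system)
k = A_f * np.exp(-Ea / (R_gas * T))       # Arrhenius rate constant

nx, dx, v, dt = 200, 1.0, 1e-3, 100.0     # grid cells, m, m/s, s
C = np.zeros(nx); C[0] = 1.0              # fixed inlet concentration

for _ in range(500):                      # explicit upwind + reaction
    C[1:] -= v * dt / dx * (C[1:] - C[:-1])
    C *= np.exp(-k * dt)                  # exact first-order decay over dt
    C[0] = 1.0
print(f"k = {k:.3e} 1/s, C at x = 50 m: {C[50]:.3f}")
```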
Abstract:
CXTANNEAL is a program for analysing contaminant transport in soils. The code, written in Fortran 77, is a modified version of CXTFIT, a commonly used package for estimating solute transport parameters in soils. The main improvement in the present code is the inclusion of simulated annealing as the optimization technique for curve fitting. Tests with hypothetical data show that CXTANNEAL performs better than the original code in searching for optimal parameter estimates. To reduce the computational time, a parallel version of CXTANNEAL (CXTANNEAL_P) was also developed. (C) 1999 Elsevier Science Ltd. All rights reserved.
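For readers unfamiliar with the technique, here is a generic sketch of simulated annealing applied to curve fitting. The model, data, and cooling schedule are hypothetical stand-ins for a solute breakthrough curve fit; this is not CXTANNEAL's Fortran implementation.

```python
# Simulated annealing for a 2-parameter curve fit: random perturbations
# with Metropolis acceptance under a geometrically cooling temperature.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.1, 10.0, 50)

def model(theta, t):                      # hypothetical 2-parameter model
    v, D = theta
    return np.exp(-(1.0 - v * t) ** 2 / (4.0 * D * t))

data = model((0.4, 0.05), t) + rng.normal(0, 0.01, t.size)
sse = lambda th: np.sum((model(th, t) - data) ** 2)

theta = np.array([1.0, 0.5])              # deliberately poor start
temp = 1.0
for _ in range(20000):
    cand = theta + rng.normal(0, 0.02, 2)
    if (cand > 0).all():
        d = sse(cand) - sse(theta)
        if d < 0 or rng.random() < np.exp(-d / temp):   # Metropolis rule
            theta = cand
    temp *= 0.9995                        # geometric cooling schedule
print("fitted (v, D):", np.round(theta, 3))
```

Unlike gradient-based least squares, the occasional acceptance of uphill moves lets the search escape local minima, which is the advantage the abstract reports over the original CXTFIT optimizer.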
Abstract:
Over recent years databases have become an extremely important resource for biomedical research. Immunology research is increasingly dependent on access to extensive biological databases to extract existing information, plan experiments, and analyse experimental results. This review describes 15 immunological databases that have appeared over the last 30 years. In addition, important issues regarding database design and the potential for misuse of information contained within these databases are discussed. Access pointers are provided for the major immunological databases and also for a number of other immunological resources accessible over the World Wide Web (WWW). (C) 2000 Elsevier Science B.V. All rights reserved.
Abstract:
A case-sensitive intelligent model editor has been developed for constructing consistent lumped dynamic process models and for simplifying them using modelling assumptions. The approach is based on a systematic assumption-driven modelling procedure and on the syntax and semantics of process models and the simplifying assumptions.
Abstract:
We present a method of estimating HIV incidence rates in epidemic situations from data on age-specific prevalence and changes in the overall prevalence over time. The method is applied to women attending antenatal clinics in Hlabisa, a rural district of KwaZulu/Natal, South Africa, where transmission of HIV is overwhelmingly through heterosexual contact. A model which gives age-specific prevalence rates in the presence of a progressing epidemic is fitted to prevalence data for 1998 using maximum likelihood methods and used to derive the age-specific incidence. Error estimates are obtained using a Monte Carlo procedure. Although the method is quite general, some simplifying assumptions are made concerning the form of the risk function, and sensitivity analyses are performed to explore the importance of these assumptions. The analysis shows that in 1998 the annual incidence of infection per susceptible woman increased from 5.4 per cent (3.3-8.5 per cent; here and elsewhere ranges give 95 per cent confidence limits) at age 15 years to 24.5 per cent (20.6-29.1 per cent) at age 22 years and declined to 1.3 per cent (0.5-2.9 per cent) at age 50 years; standardized to a uniform age distribution, the overall incidence per susceptible woman aged 15 to 59 was 11.4 per cent (10.0-13.1 per cent); per woman in the population it was 8.4 per cent (7.3-9.5 per cent). Standardized to the age distribution of the female population, the average incidence per woman was 9.6 per cent (8.4-11.0 per cent); standardized to the age distribution of women attending antenatal clinics, it was 11.3 per cent (9.8-13.3 per cent). The estimated incidence depends on the values used for the epidemic growth rate and the AIDS-related mortality. To ensure that, for this population, errors in these two parameters change the age-specific estimates of the annual incidence by less than the standard deviation of the estimates of the age-specific incidence, the AIDS-related mortality should be known to within +/-50 per cent and the epidemic growth rate to within +/-25 per cent, both of which conditions are met. In the absence of cohort studies to measure the incidence of HIV infection directly, useful estimates of the age-specific incidence can be obtained from cross-sectional, age-specific prevalence data and repeat cross-sectional data on the overall prevalence of HIV infection. Several assumptions were made because of the lack of data, but sensitivity analyses show that they are unlikely to affect the overall estimates significantly. These estimates are important in assessing the magnitude of the public health problem, for designing vaccine trials, and for evaluating the impact of interventions. Copyright (C) 2001 John Wiley & Sons, Ltd.
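The core prevalence-to-incidence relation can be sketched simply: in a stable epidemic, and ignoring both AIDS mortality and epidemic growth (the two effects the paper's method explicitly corrects for), the incidence per susceptible at age a is lambda(a) = P'(a) / (1 - P(a)). The prevalence curve below is hypothetical.

```python
# Naive incidence from age-specific prevalence: lambda(a) = P'(a)/(1-P(a)).
import numpy as np

age = np.arange(15, 51, 1.0)
# Hypothetical age-specific prevalence peaking in the mid-20s.
P = 0.35 * np.exp(-((age - 26.0) / 8.0) ** 2)

dPda = np.gradient(P, age)                 # finite-difference P'(a)
lam = dPda / (1.0 - P)                     # incidence per susceptible

peak = age[np.argmax(lam)]
print(f"incidence peaks near age {peak:.0f}: "
      f"{100 * lam.max():.1f} per 100 susceptible woman-years")
```

Past the prevalence peak this naive estimator goes negative, which is precisely why the growth-rate and mortality corrections built into the paper's likelihood model are needed.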
Abstract:
Peptides that induce and recall T-cell responses are called T-cell epitopes. T-cell epitopes may be useful in a subunit vaccine against malaria. Computer models that simulate peptide binding to MHC are useful for selecting candidate T-cell epitopes since they minimize the number of experiments required for their identification. We applied a combination of computational and immunological strategies to select candidate T-cell epitopes. A total of 86 experimental binding assays were performed in three rounds of identification of HLA-A11 binding peptides from the six pre-erythrocytic malaria antigens. Thirty-six peptides were experimentally confirmed as binders. We show that the cyclical refinement of the ANN models results in a significant improvement in the efficiency of identifying potential T-cell epitopes. (C) 2001 by Elsevier Science Inc.
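The modelling idea can be sketched in a few lines: one-hot encode 9-mer peptides and train a small neural network to classify binders. The peptides and labels below are synthetic, with a toy binding "rule" (a C-terminal lysine, loosely echoing the known HLA-A11 anchor preference); this is not the paper's dataset or network.

```python
# Minimal ANN sketch for peptide-MHC binding prediction on synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier

AA = "ACDEFGHIKLMNPQRSTVWY"
rng = np.random.default_rng(4)

def encode(pep):                      # 9-mer -> 180-dim one-hot vector
    x = np.zeros((9, 20))
    for i, aa in enumerate(pep):
        x[i, AA.index(aa)] = 1.0
    return x.ravel()

peps = ["".join(rng.choice(list(AA), 9)) for _ in range(400)]
y = np.array([int(p[-1] == "K") for p in peps])   # toy binding rule
X = np.array([encode(p) for p in peps])

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The cyclical refinement reported in the abstract corresponds to retraining such a model after each round of binding assays, so each round's experimental results sharpen the next round's candidate selection.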