112 results for Fluid Dynamics -- Computer simulation
Abstract:
Recent technological advances in remote sensing have enabled investigation of the morphodynamics and hydrodynamics of large rivers. However, measuring topography and flow in these very large rivers is time consuming and thus often constrains the spatial resolution and reach-length scales that can be monitored. Similar constraints exist for computational fluid dynamics (CFD) studies of large rivers, requiring maximization of mesh- or grid-cell dimensions and implying a reduction in the representation of bedform-roughness elements that are of the order of a model grid cell or less, even if they are represented in available topographic data. These "subgrid" elements must be parameterized, and this paper applies and considers the impact of roughness-length treatments that include the effect of bed roughness due to "unmeasured" topography. CFD predictions were found to be sensitive to the roughness-length specification. Model optimization was based on acoustic Doppler current profiler measurements and estimates of the water-surface slope for a variety of roughness lengths. This proved difficult, as the metrics used to assess optimal model performance diverged due to the effects of large bedforms that are not well parameterized in roughness-length treatments. However, the general spatial flow patterns are effectively predicted by the model. Changes in roughness length were shown to have a major impact upon flow routing at the channel scale. The results also indicate an absence of secondary flow circulation cells in the reach studied, and suggest that simpler two-dimensional models may have great utility in the investigation of flow within large rivers. Citation: Sandbach, S. D., et al. (2012), Application of a roughness-length representation to parameterize energy loss in 3-D numerical simulations of large rivers, Water Resour. Res., 48, W12501, doi:10.1029/2011WR011284.
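The abstract does not give the specific roughness-length closure used; for context, roughness-length treatments in river CFD are typically built on the rough-wall logarithmic law, in which the roughness length z_0 absorbs the momentum loss from unresolved bed topography. A standard statement of that law (an illustrative assumption, not necessarily the paper's exact formulation):

```latex
% Rough-wall log law: u_* is the shear velocity, \kappa \approx 0.41 is
% von Karman's constant, and z_0 is the roughness length. A common link
% to an equivalent sand-grain roughness k_s is z_0 \approx k_s / 30.
u(z) = \frac{u_*}{\kappa}\,\ln\!\left(\frac{z}{z_0}\right)
```

Increasing z_0 increases near-bed drag, which is why the predictions described above are sensitive to how it is specified.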
Abstract:
High-throughput technologies are now used to generate more than one type of data from the same biological samples. To properly integrate such data, we propose using co-modules, which describe coherent patterns across paired data sets, and we develop several modular methods for their identification. We first test these methods using in silico data, demonstrating that the integrative scheme of our Ping-Pong Algorithm uncovers drug-gene associations more accurately when the data are noisy or complex. Second, we provide an extensive comparative study using the gene-expression and drug-response data of the NCI-60 cell lines. Using information from the DrugBank and Connectivity Map databases, we show that the Ping-Pong Algorithm predicts drug-gene associations significantly better than other methods. Co-modules provide insights into possible mechanisms of action for a wide range of drugs and suggest new targets for therapy.
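The abstract does not spell out the Ping-Pong Algorithm's update rules. As a loose schematic of alternating ("ping-pong") propagation between two data sets that share the same samples, one could picture something like the sketch below; all update and normalization choices here are invented for illustration and are not the authors' method:

```python
import numpy as np

def ping_pong(E, R, n_iter=50, seed=0):
    """Schematic alternating propagation across paired data sets.
    E (genes x samples) and R (drugs x samples) share the sample axis.
    A gene score vector is propagated to drug space and back until it
    stabilizes; normalization choices here are illustrative only."""
    rng = np.random.default_rng(seed)
    g = rng.random(E.shape[0])      # initial gene scores
    for _ in range(n_iter):
        s = E.T @ g                 # gene scores -> sample scores
        d = R @ s                   # sample scores -> drug scores
        s = R.T @ d                 # back to sample space
        g = E @ s                   # back to gene space
        g /= np.linalg.norm(g)      # keep the iteration bounded
    return g, d / np.linalg.norm(d)

# Toy example with random matrices (shapes only; data invented):
rng = np.random.default_rng(1)
E = rng.random((50, 20))            # 50 genes x 20 samples
R = rng.random((30, 20))            # 30 drugs x 20 samples
gene_scores, drug_scores = ping_pong(E, R)
```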
Abstract:
Recognition by the T-cell receptor (TCR) of immunogenic peptides (p) presented by Class I major histocompatibility complexes (MHC) is the key event in the immune response against virus-infected cells or tumor cells. A study of the 2C TCR/SIYR/H-2K(b) system using computational alanine scanning and a much faster binding free energy decomposition based on the Molecular Mechanics-Generalized Born Surface Area (MM-GBSA) method is presented. The results show that the TCR-p-MHC binding free energy decomposition using this approach, including entropic terms, provides a detailed and reliable description of the interactions between the molecules at an atomistic level. Comparison of the decomposition results with experimentally determined activity differences for alanine mutants yields a correlation of 0.67 when the entropy is neglected and 0.72 when the entropy is taken into account. Similarly, comparison of experimental activities with variations in binding free energies determined by computational alanine scanning yields correlations of 0.72 and 0.74 when the entropy is neglected or taken into account, respectively. Some key interactions for TCR-p-MHC binding are analyzed, and possible side-chain replacements are proposed in the context of TCR protein engineering. In addition, a comparison of the two theoretical approaches for estimating the role of each side chain in the complexation is given, and a new ad hoc approach to decompose the vibrational entropy term into atomic contributions, the linear decomposition of the vibrational entropy (LDVE), is introduced. The latter allows rapid calculation of the entropic contribution of side chains of interest to the binding. This new method is based on the idea that the most important contributions to the vibrational entropy of a molecule originate from residues that contribute most to the vibrational amplitude of the normal modes. The LDVE approach is shown to provide results very similar to those of the exact but highly computationally demanding method.
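For reference, the standard MM-GBSA estimate on which such decompositions are built combines a molecular-mechanics energy, a generalized Born plus surface-area solvation term, and an entropy term (a textbook form, assumed here rather than quoted from the paper):

```latex
% MM-GBSA binding free energy (standard form):
\Delta G_{\mathrm{bind}} \approx \langle E_{\mathrm{MM}} \rangle
  + \langle G_{\mathrm{GB}} + G_{\mathrm{SA}} \rangle - T\Delta S,
\qquad
E_{\mathrm{MM}} = E_{\mathrm{int}} + E_{\mathrm{ele}} + E_{\mathrm{vdW}}
% Per-residue decomposition assigns each term's atomic contributions to
% the residue owning those atoms; the LDVE described above extends the
% same idea to the vibrational entropy term.
```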
Abstract:
To study the interaction of the T cell receptor with its ligand, a complex of a major histocompatibility complex molecule and a peptide, we derived H-2Kd-restricted cytolytic T lymphocyte clones from mice immunized with a Plasmodium berghei circumsporozoite peptide (PbCS) 252-260 (SYIPSAEKI) derivative containing photoreactive Nepsilon-[4-azidobenzoyl] lysine in place of Pro-255. This residue and Lys-259 were essential parts of the epitope recognized by these clones. Most of the clones expressed BV1S1A1-encoded beta chains along with specific complementarity-determining region (CDR) 3beta regions, but diverse alpha chain sequences. Surprisingly, all T cell receptors were preferentially photoaffinity labeled on the alpha chain. For a representative T cell receptor, the photoaffinity-labeled site was located in the Valpha C-strand. Computer modeling suggested the presence of a hydrophobic pocket, formed by parts of the Valpha/Jalpha C-, F-, and G-strands and adjacent CDR3alpha residues, structured so as to avidly bind the photoreactive ligand side chain. We previously found that a T cell receptor specific for a PbCS peptide derivative containing this photoreactive side chain in position 259 similarly used a hydrophobic pocket located between the junctional CDR3 loops. We propose that this nonpolar domain in these locations allows T cell receptors to avidly and specifically bind epitopes containing non-peptidic side chains.
Abstract:
Recent progress in the experimental determination of protein structures allows us to understand, at a very detailed level, the molecular recognition mechanisms that underlie living matter. This level of understanding makes it possible to design rational therapeutic approaches, in which effector molecules are adapted or created de novo to perform a given function. An example of such an approach is drug design, where small inhibitory molecules are designed using in silico simulations and tested in vitro. In this article, we present a similar approach to rationally optimize the sequence of killer T lymphocyte receptors to make them more efficient against melanoma cells. The architecture of this translational research project is presented, together with its implications both for basic research and in the clinic.
Abstract:
Hidden Markov models (HMMs) are probabilistic models that are well adapted to many tasks in bioinformatics, for example, predicting the occurrence of specific motifs in biological sequences. MAMOT is a command-line program for Unix-like operating systems, including MacOS X, that we developed to allow scientists to apply HMMs more easily in their research. One can define the architecture and initial parameters of the model in a text file and then use MAMOT for parameter optimization on example data, for decoding (e.g., predicting motif occurrences in sequences), and for producing stochastic sequences generated according to the probabilistic model. Two examples for which models are provided are coiled-coil domains in protein sequences and protein binding sites in DNA. Useful features include the use of pseudocounts, state tying, the fixing of selected parameters during learning, and the inclusion of prior probabilities in decoding. AVAILABILITY: MAMOT is implemented in C++ and is distributed under the GNU General Public License (GPL). The software, documentation, and example model files can be found at http://bcf.isb-sib.ch/mamot
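MAMOT's model-file format and command-line options are not reproduced here. As a generic illustration of the decoding task such a tool performs, below is a minimal Viterbi decoder for a discrete-emission HMM; the two-state "background vs. motif" model and all of its parameters are invented for the example:

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely state path for a discrete-emission HMM (log space)."""
    n_states = len(start_p)
    T = len(obs)
    log_delta = np.full((T, n_states), -np.inf)
    backptr = np.zeros((T, n_states), dtype=int)
    log_delta[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for j in range(n_states):
            scores = log_delta[t - 1] + np.log(trans_p[:, j])
            backptr[t, j] = np.argmax(scores)
            log_delta[t, j] = scores[backptr[t, j]] + np.log(emit_p[j, obs[t]])
    # Trace back the best path from the final time step.
    path = [int(np.argmax(log_delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

# Toy two-state model ("background" vs "motif"); all numbers invented.
start = np.array([0.9, 0.1])
trans = np.array([[0.95, 0.05],
                  [0.20, 0.80]])
emit = np.array([[0.25, 0.25, 0.25, 0.25],   # background: uniform over ACGT
                 [0.10, 0.40, 0.40, 0.10]])  # motif: GC-rich
sequence = [0, 1, 2, 2, 1, 3, 0]             # indices into A, C, G, T
print(viterbi(sequence, start, trans, emit))
```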
Abstract:
Neutrality tests in quantitative genetics provide a statistical framework for the detection of selection on polygenic traits in wild populations. However, the existing method based on comparisons of divergence at neutral markers and quantitative traits (Q(st)-F(st)) suffers from several limitations that hinder a clear interpretation of the results with typical empirical designs. In this article, we propose a multivariate extension of this neutrality test based on empirical estimates of the among-populations (D) and within-populations (G) covariance matrices by MANOVA. A simple pattern is expected under neutrality: D = 2F(st)/(1 - F(st))G, so that neutrality implies both proportionality of the two matrices and a specific value of the proportionality coefficient. This pattern is tested using Flury's framework for matrix comparison [common principal-component (CPC) analysis], a well-known tool in G matrix evolution studies. We show the importance of using a Bartlett adjustment of the test for the small sample sizes typically found in empirical studies. We propose a dual test: (i) that the proportionality coefficient is not different from its neutral expectation [2F(st)/(1 - F(st))] and (ii) that the MANOVA estimates of the mean square matrices among and within populations are proportional. These two tests combined provide a more stringent test for neutrality than the classic Q(st)-F(st) comparison and avoid several statistical problems. Extensive simulations of realistic empirical designs suggest that these tests correctly detect the expected pattern under neutrality and have enough power to efficiently detect mild to strong selection (homogeneous, heterogeneous, or mixed) when it is occurring on a set of traits. This method also provides a rigorous and quantitative framework for disentangling the effects of different selection regimes and of drift on the evolution of the G matrix. We discuss practical requirements for the proper application of our test in empirical studies and potential extensions.
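Restating the neutral expectation from the abstract in display form (notation as defined above):

```latex
% Expected relation between the among-population (D) and
% within-population (G) covariance matrices under neutrality:
\mathbf{D} = \frac{2\,F_{ST}}{1 - F_{ST}}\,\mathbf{G}
% The dual test checks (i) proportionality D \propto G and (ii) that
% the proportionality coefficient equals 2 F_{ST} / (1 - F_{ST}).
```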
Abstract:
Using numerical simulations, we investigate the shapes of random equilateral open and closed chains, one of the simplest models of freely fluctuating polymers in a solution. We are interested in the 3D density distribution of the modeled polymers after they have been aligned with respect to their three principal axes of inertia. This type of approach was pioneered by Theodorou and Suter in 1985. While individual configurations of the modeled polymers are almost always nonsymmetric, the approach of Theodorou and Suter results in cumulative shapes that are highly symmetric. By taking advantage of asymmetries within the individual configurations, we modify the procedure of aligning independent configurations in a way that preserves their asymmetry. This approach reveals, for example, that the 3D density distribution for linear polymers has the bean shape predicted theoretically by Kuhn. The symmetry-breaking approach provides information complementary to the traditional, symmetrical 3D density distributions originally introduced by Theodorou and Suter.
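As a sketch of the alignment step described above (before any symmetry breaking), the following aligns a chain with its principal axes via the eigenvectors of its gyration tensor; the eigenvector signs are arbitrary, which is exactly the ambiguity the authors' symmetry-breaking modification resolves. All numerical choices are illustrative:

```python
import numpy as np

def align_to_principal_axes(coords):
    """Center a chain and rotate it so its principal axes (eigenvectors
    of the gyration tensor, which coincide with the principal axes of
    inertia) align with x, y, z, longest axis first. Note: each
    eigenvector's sign is arbitrary; fixing the signs using per-chain
    asymmetries is the paper's symmetry-breaking step (not done here)."""
    centered = coords - coords.mean(axis=0)
    gyration = centered.T @ centered / len(centered)
    eigvals, eigvecs = np.linalg.eigh(gyration)   # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]             # largest axis first
    return centered @ eigvecs[:, order]

# Random equilateral open chain: unit steps in random directions.
rng = np.random.default_rng(0)
steps = rng.normal(size=(100, 3))
steps /= np.linalg.norm(steps, axis=1, keepdims=True)
chain = np.cumsum(steps, axis=0)
aligned = align_to_principal_axes(chain)
```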
Abstract:
The tendency for more closely related species to share similar traits and ecological strategies can be explained by their longer shared evolutionary histories and represents phylogenetic conservatism. How strongly species traits co-vary with phylogeny can significantly impact how we analyze cross-species data and can influence our interpretation of assembly rules in the rapidly expanding field of community phylogenetics. Phylogenetic conservatism is typically quantified by analyzing the distribution of species values on the phylogenetic tree that connects them. Many phylogenetic approaches, however, assume a completely sampled phylogeny: while we have good estimates of deeper phylogenetic relationships for many species-rich groups, such as birds and flowering plants, we often lack information on more recent interspecific relationships (i.e., within a genus). A common solution has been to represent these relationships as polytomies on trees using taxonomy as a guide. Here we show that such trees can dramatically inflate estimates of phylogenetic conservatism quantified using S. P. Blomberg et al.'s K statistic. Using simulations, we show that even randomly generated traits can appear to be phylogenetically conserved on poorly resolved trees. We provide a simple rarefaction-based solution that can reliably retrieve unbiased estimates of K, and we illustrate our method using data on first flowering times from Thoreau's woods (Concord, Massachusetts, USA).
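For reference, Blomberg et al.'s K compares the observed ratio of trait variance about the phylogenetic mean to the variance implied by the tree against the same ratio's Brownian-motion expectation (the standard definition from Blomberg et al. 2003, paraphrased rather than quoted from this abstract):

```latex
% Blomberg's K: MSE_0 is the mean squared error of tip values about the
% phylogenetically corrected mean; MSE is based on the tree's expected
% variance-covariance matrix. K = 1 under Brownian motion; K > 1
% indicates stronger-than-Brownian conservatism.
K = \frac{\left(\mathrm{MSE}_0 / \mathrm{MSE}\right)_{\mathrm{observed}}}
         {\left(\mathrm{MSE}_0 / \mathrm{MSE}\right)_{\mathrm{expected,\,BM}}}
```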
Abstract:
The purpose of this study was to test the hypothesis that athletes having slower oxygen uptake (VO2) kinetics would benefit more, in terms of time spent near VO2max, from an increase in the intensity of an intermittent running training (IT) session. After determination of VO2max, vVO2max (i.e. the minimal velocity associated with VO2max in an incremental test) and the time to exhaustion at vVO2max (Tlim), seven well-trained triathletes performed two IT sessions in random order. The two sessions comprised 30-s work intervals at either 100% (IT100%) or 105% (IT105%) of vVO2max, with 30-s recovery intervals at 50% of vVO2max between repeats. The parameters of the VO2 kinetics (td1, tau1, A1, td2, tau2, A2, i.e. the time delay, time constant and amplitude of the primary phase and of the slow component, respectively) during the Tlim test were modelled with two exponential functions. The highest VO2 reached was significantly lower (P<0.01) in IT100%, run at 19.8 (0.9) km·h^-1 [66.2 (4.6) ml·min^-1·kg^-1], than in IT105%, run at 20.8 (1.0) km·h^-1 [71.1 (4.9) ml·min^-1·kg^-1], or in the incremental test [71.2 (4.2) ml·min^-1·kg^-1]. The time sustained above 90% of VO2max in IT105% [338 (149) s] was significantly higher (P<0.05) than in IT100% [168 (131) s]. The average Tlim was 244 (39) s, tau1 was 15.8 (5.9) s and td2 was 96 (13) s. tau1 was correlated with the difference in time spent above 90% of VO2max between IT105% and IT100% (r=0.91; P<0.01). In conclusion, athletes with slower VO2 kinetics in a vVO2max constant-velocity test benefited more from the 5% rise in IT work intensity, exercising for longer above 90% of VO2max when the IT intensity was increased from 100% to 105% of vVO2max.
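The two-exponential model named above, with the parameters listed in the abstract, conventionally takes a form like the following (the baseline term and onset conditions are standard modelling conventions, assumed here rather than quoted):

```latex
% Primary phase (subscript 1) plus slow component (subscript 2);
% H is the Heaviside step, so each phase contributes only after its
% own time delay td_i.
\dot{V}\mathrm{O}_2(t) = \dot{V}\mathrm{O}_{2,\mathrm{base}}
  + A_1\!\left(1 - e^{-(t - td_1)/\tau_1}\right) H(t - td_1)
  + A_2\!\left(1 - e^{-(t - td_2)/\tau_2}\right) H(t - td_2)
```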
Abstract:
The shortest tube of constant diameter that can form a given knot represents the 'ideal' form of the knot. Ideal knots provide an irreducible representation of the knot, and they have some intriguing mathematical and physical features, including a direct correspondence with the time-averaged shapes of knotted DNA molecules in solution. Here we describe the properties of ideal forms of composite knots: knots obtained by the sequential tying of two or more independent knots (called factor knots) on the same string. We find that the writhe (related to the handedness of crossing points) of composite knots is the sum of that of the ideal forms of the factor knots. By comparing ideal composite knots with simulated configurations of knotted, thermally fluctuating DNA, we conclude that the additivity of writhe applies also to randomly distorted configurations of composite knots and their corresponding factor knots. We show that composite knots with several factor knots may possess distinct structural isomers that can be interconverted only by loosening the knot.
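The additivity result can be written compactly (using # for the knot composition described above):

```latex
% Writhe of an ideal composite knot is the sum of the writhes of the
% ideal forms of its factor knots:
\mathrm{Wr}\!\left(K_1 \# K_2 \# \cdots \# K_n\right)
  = \sum_{i=1}^{n} \mathrm{Wr}(K_i)
```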
Abstract:
The purpose of this report is to describe the method used to construct a "Case Mix" system which, building on the DRGs, no longer describes the hospital case load solely in terms of principal diagnoses, but also in terms of recorded comorbidities or complications and of the surgical procedures performed.
Abstract:
The origin of new genes through gene duplication is fundamental to the evolution of lineage- or species-specific phenotypic traits. In this report, we estimate the number of functional retrogenes generated on the lineage leading to humans by the high rate of retroposition (retroduplication) in primates. Extensive comparative sequencing and expression studies, coupled with evolutionary analyses and simulations, suggest that a significant proportion of recent retrocopies represent bona fide human genes. We estimate that at least one new retrogene per million years emerged on the human lineage during the past approximately 63 million years of primate evolution. Detailed analysis of a subset of the data shows that the majority of retrogenes are specifically expressed in testis, whereas their parental genes show broad expression patterns. Consistent with this, most retrogenes evolved functional roles in spermatogenesis. Proteins encoded by X chromosome-derived retrogenes were strongly preserved by purifying selection following the duplication event, supporting the view that they may act as functional autosomal substitutes during X-inactivation of late spermatogenesis genes. Also, some retrogenes acquired a new or more adapted function driven by positive selection. We conclude that retroduplication contributed significantly to the formation of recent human genes and that most new retrogenes were progressively recruited during primate evolution by natural and/or sexual selection to enhance male germline function.
Abstract:
Exposure to solar ultraviolet (UV) radiation is the main causative factor for skin cancer. UV exposure depends on environmental and individual factors, but individual exposure data remain scarce. While ground UV irradiance is monitored via different techniques, it is difficult to translate such observations into human UV exposure or dose because of confounding factors. A multi-disciplinary collaboration developed a model predicting the dose and distribution of UV exposure on the basis of ground irradiation and morphological data. Standard 3D computer graphics techniques were adapted to develop a simulation tool that estimates the solar exposure of a virtual manikin depicted as a triangle mesh surface. The amount of solar energy received by various body locations is computed separately for direct, diffuse and reflected radiation. Dosimetric measurements obtained in field conditions were used to assess the model performance. The model predicted exposure to solar UV adequately, with a symmetric mean absolute percentage error of 13% and half of the predictions within 17% of the measurements. Using this tool, solar UV exposure patterns were investigated with respect to the relative contributions of direct, diffuse and reflected radiation. Exposure doses for various body parts and exposure scenarios of a standing individual were assessed using erythemally weighted UV ground irradiance data measured in 2009 at Payerne, Switzerland, as input. For most anatomical sites, mean daily doses were high (typically 6.2-14.6 Standard Erythemal Doses, SED) and exceeded recommended exposure values. Direct exposure was important during specific periods (e.g. midday during summer) but contributed moderately to the annual dose, ranging from 15 to 24% for vertical and horizontal body parts, respectively. Diffuse irradiation explained about 80% of the cumulative annual exposure dose.
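The 13% figure above is a symmetric mean absolute percentage error (SMAPE). One common definition is sketched below; the paper's exact formula is not given in the abstract, and the dose values in the example are invented:

```python
import numpy as np

def smape(predicted, measured):
    """Symmetric mean absolute percentage error, as a fraction.
    One common definition; the paper's exact formula is not stated."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return np.mean(np.abs(predicted - measured)
                   / ((np.abs(predicted) + np.abs(measured)) / 2))

# Toy example with invented daily dose values (SED):
print(smape([10.0, 7.5, 12.0], [9.0, 8.0, 14.0]))  # ~0.108, i.e. ~11%
```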