831 results for computational complexity
Abstract:
Recognition by the T-cell receptor (TCR) of immunogenic peptides (p) presented by Class I major histocompatibility complexes (MHC) is the key event in the immune response against virus-infected cells or tumor cells. A study of the 2C TCR/SIYR/H-2K(b) system using computational alanine scanning and a much faster binding free energy decomposition based on the Molecular Mechanics-Generalized Born Surface Area (MM-GBSA) method is presented. The results show that the TCR-p-MHC binding free energy decomposition using this approach, including entropic terms, provides a detailed and reliable description of the interactions between the molecules at an atomistic level. Comparison of the decomposition results with experimentally determined activity differences for alanine mutants yields a correlation of 0.67 when entropy is neglected and 0.72 when entropy is taken into account. Similarly, comparison of experimental activities with variations in binding free energies determined by computational alanine scanning yields correlations of 0.72 and 0.74 when entropy is neglected or taken into account, respectively. Some key interactions for TCR-p-MHC binding are analyzed, and possible side-chain replacements are proposed in the context of TCR protein engineering. In addition, a comparison of the two theoretical approaches for estimating the role of each side chain in the complexation is given, and a new ad hoc approach that decomposes the vibrational entropy term into atomic contributions, the linear decomposition of the vibrational entropy (LDVE), is introduced. The latter allows rapid calculation of the entropic contribution of side chains of interest to the binding. This new method is based on the idea that the most important contributions to the vibrational entropy of a molecule originate from the residues that contribute most to the vibrational amplitude of the normal modes. The LDVE approach is shown to provide results very similar to those of the exact but computationally very demanding method.
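To make the LDVE idea concrete, here is a minimal sketch, assuming precomputed normal-mode frequencies and mass-weighted eigenvectors: each mode's quantum harmonic-oscillator entropy is apportioned to residues in proportion to the residue's share of the mode's squared amplitude. Function names and input layout are illustrative, not the authors' implementation.

```python
import numpy as np

KB = 0.0019872041  # Boltzmann constant, kcal/(mol*K)
C2 = 1.4387769     # second radiation constant h*c/kB, cm*K

def mode_entropy(freqs_cm1, T=300.0):
    """Quantum harmonic-oscillator entropy of each mode, kcal/(mol*K)."""
    x = C2 * np.asarray(freqs_cm1) / T
    return KB * (x / (np.exp(x) - 1.0) - np.log(1.0 - np.exp(-x)))

def ldve(freqs_cm1, eigvecs, atom_residue, T=300.0):
    """Apportion each mode's entropy to residues by amplitude share.

    freqs_cm1    : (n_modes,) normal-mode frequencies in cm^-1
    eigvecs      : (n_modes, n_atoms, 3) mass-weighted eigenvectors
    atom_residue : (n_atoms,) integer residue index of each atom
    Returns per-residue contributions summing to the total S_vib.
    """
    n_res = int(atom_residue.max()) + 1
    s_res = np.zeros(n_res)
    for s_mode, vec in zip(mode_entropy(freqs_cm1, T), eigvecs):
        amp = (vec ** 2).sum(axis=1)  # per-atom squared amplitude in this mode
        s_res += s_mode * np.bincount(atom_residue, amp, n_res) / amp.sum()
    return s_res
```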
Abstract:
Methods like event history analysis can show the existence of diffusion and part of its nature, but they do not study the process itself. Nowadays, thanks to the increasing performance of computers, such processes can be studied using computational modeling. This thesis presents an agent-based model of policy diffusion mainly inspired by the model developed by Braun and Gilardi (2006). I first develop a theoretical framework of policy diffusion that presents the main internal drivers of policy diffusion - such as the preference for the policy, the effectiveness of the policy, the institutional constraints, and the ideology - and its main mechanisms, namely learning, competition, emulation, and coercion. Diffusion, expressed through these interdependencies, is therefore a complex process that needs to be studied with computational agent-based modeling. In a second step, computational agent-based modeling is defined along with its most significant concepts: complexity and emergence. Using computational agent-based modeling implies developing an algorithm and programming it; once this is done, the different agents are left to interact. Consequently, a phenomenon of diffusion, derived from learning, emerges, meaning that the choice made by an agent is conditional on those made by its neighbors. As a result, learning follows an inverted S-curve, which leads to partial convergence - global divergence and local convergence - that triggers the emergence of political clusters, i.e., the creation of regions with the same policy. Furthermore, the average effectiveness in this computational world tends to follow a J-shaped curve, meaning that time is needed not only for a policy to deploy its effects, but also for a country to find the best-suited policy. To conclude, diffusion is an emergent phenomenon arising from complex interactions, and the outcomes of my model are in line with both the theoretical expectations and the empirical evidence.
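As an illustration of the learning mechanism described in this abstract, the following is a minimal agent-based sketch, not the thesis's actual model: countries on a grid adopt a policy with a probability that grows with the share of neighbours that have already adopted it, and the adoption count traces the S-shaped curve discussed above. Grid size, topology, and the adoption rule are assumptions.

```python
import random

SIZE, STEPS, BETA = 20, 50, 0.8  # grid side, time steps, learning strength

def neighbours(i, j):
    """Four nearest neighbours on a torus (wrap-around grid)."""
    return [((i + di) % SIZE, (j + dj) % SIZE)
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))]

random.seed(1)
# A few early adopters scattered at random.
adopted = [[random.random() < 0.05 for _ in range(SIZE)] for _ in range(SIZE)]

for t in range(STEPS):
    nxt = [row[:] for row in adopted]
    for i in range(SIZE):
        for j in range(SIZE):
            if not adopted[i][j]:
                share = sum(adopted[x][y] for x, y in neighbours(i, j)) / 4
                # Learning: adoption probability rises with neighbours' choices.
                nxt[i][j] = random.random() < BETA * share
    adopted = nxt
    print(t, sum(map(sum, adopted)))  # adoption count traces an S-shaped curve
```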
Abstract:
BACKGROUND: Accurate catalogs of structural variants (SVs) in mammalian genomes are necessary to elucidate the potential mechanisms that drive SV formation and to assess their functional impact. Next-generation sequencing methods for SV detection are an advance on array-based methods, but are almost exclusively limited to four basic types: deletions, insertions, inversions and copy number gains. RESULTS: By visual inspection of 100 Mbp of genome to which next-generation sequence data from 17 inbred mouse strains had been aligned, we identify and interpret 21 paired-end mapping patterns, which we validate by PCR. These paired-end mapping patterns reveal a greater diversity and complexity in SVs than previously recognized. In addition, Sanger-based sequence analysis of 4,176 breakpoints at 261 SV sites reveals additional complexity at approximately a quarter of the structural variants analyzed. We find micro-deletions and micro-insertions at SV breakpoints, ranging from 1 to 107 bp, and SNPs that extend breakpoint micro-homology and may catalyze SV formation. CONCLUSIONS: An integrative approach using experimental analyses to train computational SV calling is essential for the accurate resolution of the architecture of SVs. We find considerable complexity in SV formation; about a quarter of SVs in the mouse are composed of a complex mixture of deletion, insertion, inversion and copy number gain. Computational methods can be adapted to identify most paired-end mapping patterns.
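For readers unfamiliar with paired-end mapping patterns, the sketch below shows how the four basic signatures are conventionally read from a discordant read pair's orientation and apparent insert size; the 21 patterns identified in the study are considerably richer than this. The thresholds and classification rules are illustrative assumptions.

```python
EXPECTED_INSERT, TOLERANCE = 400, 100  # assumed library insert size, bp

def classify_pair(chrom1, pos1, strand1, chrom2, pos2, strand2):
    """Map one discordant read pair to a candidate SV type."""
    if chrom1 != chrom2:
        return "inter-chromosomal (possible translocation)"
    if strand1 == strand2:
        return "inversion"                       # same-strand pair
    left, right = sorted([(pos1, strand1), (pos2, strand2)])
    if left[1] == "-":                           # everted (outward-facing) pair
        return "copy number gain (tandem duplication)"
    span = right[0] - left[0]
    if span > EXPECTED_INSERT + TOLERANCE:
        return "deletion"                        # mates map too far apart
    if span < EXPECTED_INSERT - TOLERANCE:
        return "insertion"                       # mates map too close together
    return "concordant"

print(classify_pair("chr1", 1000, "+", "chr1", 2500, "-"))  # -> deletion
```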
Abstract:
Clonally complex infections by Mycobacterium tuberculosis are increasingly accepted. Studies of their extent in epidemiological scenarios where the infective pressure is not high are scarce. Our study systematically searched for clonally complex infections (mixed infections by more than one strain and the simultaneous presence of clonal variants) by applying mycobacterial interspersed repetitive-unit (MIRU)-variable-number tandem-repeat (VNTR) analysis to M. tuberculosis isolates from two population-based samples of respiratory (703 cases) and respiratory-extrapulmonary (R+E) tuberculosis (TB) cases (71 cases) in a context of moderate TB incidence. Clonally complex infections were found in 11 (1.6%) of the respiratory TB cases and in 10 (14.1%) of those with R+E TB. Among the 21 cases with clonally complex TB, 9 were infected by 2 independent strains and the remaining 12 showed the simultaneous presence of 2 to 3 clonal variants. For the 10 R+E TB cases with clonally complex infections, compartmentalization (different compositions of strains/clonal variants in independent infected sites) was found in 9. All the strains/clonal variants were also genotyped by IS6110-based restriction fragment length polymorphism analysis, which split two MIRU-defined clonal variants, although in general it showed a lower discriminatory power for identifying the clonal heterogeneity revealed by MIRU-VNTR analysis. The comparative analysis of IS6110 insertion sites between coinfecting clonal variants showed differences in the genes coding for a cutinase, a PPE family protein, and two conserved hypothetical proteins. Diagnostic delay, existence of previous TB, risk of overexposure, and clustered/orphan status of the involved strains were analyzed to propose possible explanations for the cases with clonally complex infections. Our study characterizes in detail all the clonally complex infections by M. tuberculosis found in a systematic survey and shows that these phenomena can be found to a greater extent than expected, even in an unselected population-based sample lacking high infective pressure.
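The distinction drawn above between clonal variants and mixed infections can be illustrated with a short sketch: profiles differing at a single MIRU-VNTR locus are flagged as clonal variants, while multi-locus differences suggest independent strains. The single-locus threshold is a common convention assumed here, not necessarily the study's exact rule.

```python
def compare_profiles(p1, p2):
    """p1, p2: tuples of repeat counts, one per MIRU-VNTR locus."""
    diffs = sum(a != b for a, b in zip(p1, p2))
    if diffs == 0:
        return "identical strain"
    if diffs == 1:
        return "clonal variants"                  # single-locus variation
    return "mixed infection (independent strains)"

# Hypothetical 24-locus profiles differing at the last locus only.
a = (2, 3, 4, 2, 1, 3, 2, 2, 5, 3, 2, 4, 3, 3, 2, 1, 4, 2, 3, 3, 2, 5, 2, 3)
b = (2, 3, 4, 2, 1, 3, 2, 2, 5, 3, 2, 4, 3, 3, 2, 1, 4, 2, 3, 3, 2, 5, 2, 4)
print(compare_profiles(a, b))  # -> clonal variants
```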
Abstract:
The aim of this study was to propose a methodology allowing a detailed characterization of sit-to-stand/stand-to-sit postural transitions. Parameters characterizing the kinematics of the trunk movement during the sit-to-stand (Si-St) postural transition were calculated using one inertial sensor system fixed on the trunk and a data logger. The dynamic complexity of these postural transitions was estimated by the fractal dimension of the acceleration-angular velocity plot. We concluded that this method provides a simple and accurate tool for monitoring frail elderly subjects and for objectively evaluating the efficacy of a rehabilitation program.
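A minimal box-counting sketch of the fractal-dimension estimate described above, applied to a synthetic 2-D acceleration/angular-velocity trajectory; the choice of box sizes and the normalisation are assumptions, not the study's exact procedure.

```python
import numpy as np

def box_counting_dimension(x, y, n_scales=8):
    """Estimate the fractal dimension of the planar curve (x, y)."""
    pts = np.column_stack([x, y])
    pts = (pts - pts.min(axis=0)) / np.ptp(pts, axis=0)  # normalise to unit square
    sizes = 2.0 ** -np.arange(1, n_scales + 1)           # box edge lengths
    counts = [len(np.unique((pts // s).astype(int), axis=0)) for s in sizes]
    # Slope of log(count) versus log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

# Synthetic stand-in for trunk acceleration and angular velocity signals.
t = np.linspace(0, 4 * np.pi, 2000)
acc = np.sin(t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
gyro = np.cos(1.3 * t)
print(box_counting_dimension(acc, gyro))  # roughly 1.0-1.5 for a noisy curve
```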
Abstract:
Typical human immunodeficiency virus-1 subtype B (HIV-1B) sequences present a GPGR signature at the tip of the variable region 3 (V3) loop; however, unusual motifs harbouring a GWGR signature have also been isolated. Although epidemiological studies have detected this variant in approximately 17-50% of total infections in Brazil, the prevalence of B"-GWGR in the southernmost region of Brazil is not yet clear. This study aimed to investigate the C2-V3 molecular diversity of the HIV-1B epidemic in southernmost Brazil. HIV-1 seropositive patients were analysed at two distinct time points in the state of Rio Grande do Sul (RS98 and RS08) and at one time point in the state of Santa Catarina (SC08). Phylogenetic analysis classified 46 individuals in the RS98 group as HIV-1B and their molecular signatures were as follows: 26% B"-GWGR, 54% B-GPGR and 20% other motifs. In the RS08 group, HIV-1B was present in 32 samples: 22% B"-GWGR, 59% B-GPGR and 19% other motifs. In the SC08 group, 32 HIV-1B samples were found: 28% B"-GWGR, 59% B-GPGR and 13% other motifs. No association could be established between the HIV-1B V3 signatures and exposure categories in the HIV-1B epidemic in RS. However, B-GPGR seemed to be related to heterosexual individuals in the SC08 group. Our results suggest that the established B"-GWGR epidemics in both states have similar patterns, which is likely due to their geographical proximity and cultural relationship.
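As an illustration of the signature tally reported above, here is a short sketch that classifies aligned C2-V3 sequences by the four-residue motif at the V3 tip. The fixed tip offset assumes pre-aligned sequences of the canonical 35-residue V3 length and is a simplification of real crown-finding.

```python
from collections import Counter

def v3_signature(v3_loop):
    """Return the 4-residue tip motif of an aligned 35-residue V3 loop."""
    return v3_loop[14:18]  # tip position assumed from a fixed alignment

def tally(signatures):
    """Percentage of GPGR, GWGR and other tip motifs."""
    counts = Counter(s if s in ("GPGR", "GWGR") else "other"
                     for s in signatures)
    total = sum(counts.values())
    return {k: round(100 * v / total) for k, v in counts.items()}

sigs = [v3_signature(s) for s in ["CTRPNNNTRKSIHIGPGRAFYTTGEIIGDIRQAHC",
                                  "CTRPNNNTRKSIHIGWGRAFYTTGEIIGDIRQAHC"]]
print(tally(sigs))  # {'GPGR': 50, 'GWGR': 50}
```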
Abstract:
To further understand the pharmacological properties of N-oleoylethanolamine (OEA), a naturally occurring lipid that activates peroxisome proliferator-activated receptor alpha (PPARα), we designed sulfamoyl analogs based on its structure. Among the compounds tested, N-octadecyl-N'-propylsulfamide (CC7) was selected for functional comparison with OEA. The studies performed include the following computational and biological approaches: 1) molecular docking analyses; 2) molecular biology studies with PPARα; and 3) pharmacological studies on feeding behavior and visceral analgesia. For the docking studies, we compared OEA and CC7 data with crystallization data obtained with the reference PPARα agonist GW409544. OEA and CC7 interacted with the ligand-binding domain of PPARα in a manner similar to GW409544. Both compounds produced similar transcriptional activation in in vitro assays, including a GST pull-down assay and reporter gene analysis. In addition, CC7 and OEA induced the mRNA expression of CPT1a in HepG2 cells through PPARα, and the induction was abolished by PPARα-specific siRNA. In vivo studies in rats showed that OEA and CC7 had anorectic and antiobesity activity and induced both lipopenia and decreases in hepatic fat content. However, different effects were observed when measuring visceral pain: OEA produced visceral analgesia whereas CC7 showed no effect. These results suggest that OEA activity at the PPARα receptor (e.g., lipid metabolism and feeding behavior) may be dissociated from other actions at alternative targets (e.g., pain), because other non-cannabimimetic ligands that interact with PPARα, such as CC7, do not reproduce the full spectrum of OEA's pharmacological activity. These results provide new opportunities for the development of specific PPARα-activating drugs based on sulfamide derivatives with a long alkyl chain for the treatment of metabolic dysfunction.
Abstract:
Two hundred twelve patients with colonization/infection due to amoxicillin-clavulanate (AMC)-resistant Escherichia coli were studied. OXA-1- and inhibitor-resistant TEM (IRT)-producing strains were associated with urinary tract infections, while OXA-1 producers and chromosomal AmpC hyperproducers were associated with bacteremic infections. AMC resistance in E. coli is a complex phenomenon with heterogeneous clinical implications.
Abstract:
Reverse transcriptase (RT) is a multifunctional enzyme in the human immunodeficiency virus (HIV)-1 life cycle and represents a primary target for drug discovery efforts against HIV-1 infection. Two classes of RT inhibitors, the nucleoside RT inhibitors (NRTIs) and the nonnucleoside RT inhibitors (NNRTIs), are prominently used in highly active antiretroviral therapy in combination with other anti-HIV drugs. However, the rapid emergence of drug-resistant viral strains has limited the success rate of anti-HIV agents. Computational methods are a significant part of the drug design process and are indispensable for studying drug resistance. In this review, recent advances in computer-aided drug design for the rational design of new compounds against HIV-1 RT are discussed, covering methods such as molecular docking, molecular dynamics, free energy calculations, quantitative structure-activity relationships (QSAR), pharmacophore modelling and absorption, distribution, metabolism, excretion and toxicity (ADMET) prediction. Successful applications of these methodologies are also highlighted.
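To give a flavour of one of the method classes this review covers, here is a minimal QSAR-style sketch: a ridge regression linking simple molecular descriptors to inhibitor activity, with cross-validation. The descriptors and activity values are invented placeholders, not data from the reviewed studies.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Rows: hypothetical compounds; columns: descriptors (MW, logP, H-bond donors).
X = np.array([[266.3, 1.9, 1], [280.3, 2.4, 1], [315.8, 3.1, 0],
              [247.7, 2.0, 2], [337.4, 3.8, 0], [291.1, 2.7, 1]])
y = np.array([6.8, 7.1, 7.9, 6.2, 8.3, 7.4])  # hypothetical pIC50 values

model = Ridge(alpha=1.0)
print(cross_val_score(model, X, y, cv=3, scoring="r2"))  # predictive check
model.fit(X, y)
print(model.predict([[300.0, 3.0, 1]]))  # predicted activity of a new analogue
```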
Abstract:
One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies, and ecological abundance studies. Devices such as nonzero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second how the unit available is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
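A simulation sketch of the two-stage construction proposed above, under simplifying assumptions: stage one draws the incidence matrix (independent binomial presence/absence per part), and stage two distributes the unit among the present parts via an additive logistic-normal draw. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
D, N = 4, 6                                  # parts per composition, samples
p_present = np.array([1.0, 0.7, 0.5, 0.8])   # stage-1 incidence probabilities
                                             # (first part always present, so
                                             # every row has a non-zero part)
mu, sigma = np.zeros(D), 0.5                 # stage-2 log-scale parameters

incidence = rng.random((N, D)) < p_present   # stage 1: where the zeros occur
z = rng.normal(mu, sigma, size=(N, D))
w = np.exp(z) * incidence                    # zero out the absent parts
comp = w / w.sum(axis=1, keepdims=True)      # stage 2: logistic-normal shares

print(incidence.astype(int))                 # the incidence matrix
print(np.round(comp, 3))                     # rows sum to 1 over non-zero parts
```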