Abstract:
Protein-protein interactions encode the wiring diagram of cellular signaling pathways, and their deregulation underlies a variety of diseases, such as cancer. Inhibiting protein-protein interactions with peptide derivatives is a promising way to develop new biological and therapeutic tools. Here, we develop a general framework to computationally handle hundreds of non-natural amino acid sidechains and predict the effect of inserting them into peptides or proteins. We first generate all structural files (pdb and mol2), as well as parameters and topologies for standard molecular mechanics software (CHARMM and Gromacs). Accurate predictions of rotamer probabilities are provided using a novel combined knowledge- and physics-based strategy. Non-natural sidechains are useful for increasing peptide ligand binding affinity. Our results on non-natural mutants of a BCL9 peptide targeting beta-catenin show very good correlation between predicted and experimental binding free energies, indicating that such predictions can be used to design new inhibitors. Data generated in this work, as well as PyMOL and UCSF Chimera plug-ins for user-friendly visualization of non-natural sidechains, are all available at http://www.swisssidechain.ch. Our results enable researchers to rapidly and efficiently work with hundreds of non-natural sidechains.
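As a minimal illustration of the kind of validation reported above, the sketch below correlates predicted and experimental binding free-energy changes for a set of mutants; the numerical values and variable names are placeholders, not data from the study.

```python
# Minimal sketch (placeholder numbers, not the study's data): correlate
# predicted and experimental binding free-energy changes for peptide mutants.
import numpy as np
from scipy.stats import pearsonr

predicted_ddg    = np.array([-1.2, -0.4, 0.3, -2.1, 0.8])   # kcal/mol, hypothetical
experimental_ddg = np.array([-1.0, -0.6, 0.1, -1.8, 1.1])   # kcal/mol, hypothetical

r, p = pearsonr(predicted_ddg, experimental_ddg)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```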
Identification of optimal structural connectivity using functional connectivity and neural modeling.
Abstract:
The complex network dynamics that arise from the interaction of the brain's structural and functional architectures give rise to mental function. Theoretical models demonstrate that the structure-function relation is maximal when the global network dynamics operate at a critical point of state transition. In the present work, we used a dynamic mean-field neural model to fit empirical structural connectivity (SC) and functional connectivity (FC) data acquired in humans and macaques and developed a new iterative-fitting algorithm to optimize the SC matrix based on the FC matrix. A dramatic improvement in the fit between the matrices was obtained with the addition of a small number of anatomical links, particularly cross-hemispheric connections, and with the reweighting of existing connections. We suggest that the notion of a critical working point, where the structure-function interplay is maximal, may provide a new way to link behavior and cognition, and a new perspective to understand recovery of function in clinical conditions.
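A toy sketch of the iterative-fitting idea is given below: structural links are reweighted (and effectively added) wherever the model FC under- or over-shoots the empirical FC. The linear surrogate used in place of the dynamic mean-field model, the coupling constant g, the learning rate, and the random data are all illustrative assumptions, not the published algorithm.

```python
# Toy sketch of iterative SC optimization against empirical FC (illustrative
# surrogate model, random placeholder data; not the published algorithm).
import numpy as np

rng = np.random.default_rng(0)
n = 8
sc = np.abs(rng.normal(0, 1, (n, n)))
sc = (sc + sc.T) / 2
np.fill_diagonal(sc, 0)
fc_emp = np.corrcoef(rng.normal(0, 1, (n, 200)))   # placeholder empirical FC

def simulate_fc(sc):
    # Crude stand-in for the mean-field model: FC of a linear network,
    # cov = (I - g*SC)^-1 (I - g*SC)^-T with sub-critical coupling g.
    g = 0.5 / np.max(np.abs(np.linalg.eigvalsh(sc)))
    a = np.linalg.inv(np.eye(n) - g * sc)
    cov = a @ a.T
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

eta = 0.1
for _ in range(50):
    err = fc_emp - simulate_fc(sc)            # where model FC is too weak / too strong
    sc = np.clip(sc + eta * err, 0, None)     # reweight or add links, keep non-negative
    np.fill_diagonal(sc, 0)

print("fit r =", np.corrcoef(simulate_fc(sc).ravel(), fc_emp.ravel())[0, 1])
```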
Abstract:
Neuroimaging studies typically compare experimental conditions using average brain responses, thereby overlooking the stimulus-related information conveyed by distributed spatio-temporal patterns of single-trial responses. Here, we take advantage of this rich information at the single-trial level to decode stimulus-related signals in two event-related potential (ERP) studies. Our method models the statistical distribution of the voltage topographies with a Gaussian Mixture Model (GMM), which reduces the dataset to a small number of representative voltage topographies. The degree of presence of these topographies across trials at specific latencies is then used to classify experimental conditions. We tested the algorithm using a cross-validation procedure in two independent EEG datasets. In the first ERP study, we classified left- versus right-hemifield checkerboard stimuli for upper and lower visual hemifields. In the second ERP study, where functional differences cannot be assumed, we classified initial versus repeated presentations of visual objects. With minimal a priori information, the GMM provides neurophysiologically interpretable features, namely voltage topographies, as well as dynamic information about brain function. This method can in principle be applied to any ERP dataset to test the functional relevance of specific time periods for stimulus processing, the predictability of subjects' behavior and cognitive states, and the discrimination between healthy and clinical populations.
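A simplified sketch of this pipeline on synthetic data is shown below: a GMM is fitted to pooled single-trial topographies, and the per-trial presence of each template map feeds a cross-validated classifier. The number of components, the logistic-regression classifier, and the random data are illustrative choices, not those of the study.

```python
# Illustrative sketch of GMM-based topography decoding (synthetic placeholder data).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_times, n_elec = 60, 50, 32
X = rng.normal(0, 1, (n_trials, n_times, n_elec))   # placeholder single-trial EEG
y = rng.integers(0, 2, n_trials)                    # two experimental conditions

gmm = GaussianMixture(n_components=5, covariance_type="diag", random_state=0)
gmm.fit(X.reshape(-1, n_elec))                      # pool topographies over trials/time

# Feature per trial: mean posterior probability ("presence") of each template map.
feats = np.stack([gmm.predict_proba(trial).mean(axis=0) for trial in X])
print(cross_val_score(LogisticRegression(max_iter=1000), feats, y, cv=5).mean())
```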
Abstract:
Stimulation of resident cells by NF-κB-activating cytokines is a central element of inflammatory and degenerative disorders of the central nervous system (CNS). This disease-mediated NF-κB activation could be used to drive transgene expression selectively in affected cells, using adeno-associated virus (AAV)-mediated gene transfer. We constructed a series of AAV vectors expressing GFP under the control of different promoters including NF-κB-responsive elements. As an initial screen, the vectors were tested in vitro in HEK-293T cells treated with TNF-α. The best profile of GFP induction was obtained with a promoter containing two blocks of four NF-κB-responsive sequences from the human JCV neurotropic polyoma virus promoter, fused to a new tight minimal CMV promoter, with the two blocks optimally spaced from each other. A therapeutic gene, the glial cell line-derived neurotrophic factor (GDNF) cDNA, under the control of a serotype 1-encapsidated NF-κB-responsive AAV vector (AAV-NF), was protective in senescent cultures of mouse cortical neurons. AAV-NF was then evaluated in vivo in the kainic acid (KA)-induced status epilepticus rat model of temporal lobe epilepsy, a major neurological disorder in which NF-κB activation plays a central pathophysiological role. We demonstrate that AAV-NF, injected into the hippocampus, responded to disease induction by mediating GFP expression, preferentially in CA1 and CA3 neurons and astrocytes, specifically in regions where inflammatory markers were also induced. Altogether, these data demonstrate the feasibility of using disease-activated, transcription factor-responsive elements to drive transgene expression specifically in affected cells in inflammatory CNS disorders using AAV-mediated gene transfer.
Abstract:
This paper presents thermal modeling for power management of a new three-dimensional (3-D) thinned-die stacking process. Besides the high concentration of power-dissipating sources, a direct consequence of the very attractive increase in integration efficiency, this ultra-compact packaging technology can suffer from the poor thermal conductivity (about 700 times lower than that of silicon) of the benzocyclobutene (BCB) used as both adhesive and planarization layer in each level of the stack. Thermal simulations were conducted with a 3-D FEM tool to analyze the specific behavior of such stacked structures and to optimize the design rules. The study first describes the heat-transfer limitation along the vertical path, examining in particular the case of high-dissipation sources of small area. First results of transient-regime characterization, obtained with a dedicated test device mounted in a single-level structure, are presented. For design optimization, the heat-draining capability of a copper grid or a full copper plate embedded in the intermediate layer of the stacked structure is evaluated as a function of the technological parameters and physical properties. The results show the benefit of such layers for transverse heat extraction under the buffer devices, which dissipate most of the power and are generally located in the peripheral zone, and for temperature uniformization, by heat spreading, in localized regions where the attachment of the thin die is degraded. Finally, the conclusions of this analysis are used for quantitative projections of the thermal performance of a first demonstrator based on a three-level stacking structure for a space application.
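To illustrate the conductivity gap mentioned above, the sketch below compares the 1-D conduction resistance of a thin BCB layer with that of an equally thick silicon layer; the layer thickness, hot-spot area, and silicon conductivity are assumed, illustrative values.

```python
# Back-of-the-envelope 1-D conduction comparison (illustrative numbers only):
# thermal resistance R = t / (k * A) of a thin BCB layer vs. the same thickness of silicon.
k_si  = 150.0          # W/(m.K), bulk silicon (approximate)
k_bcb = k_si / 700.0   # BCB roughly 700x less conductive, per the abstract

t = 5e-6               # layer thickness: 5 um (assumed)
A = (100e-6) ** 2      # hot-spot area: 100 um x 100 um (assumed)

r_bcb = t / (k_bcb * A)
r_si  = t / (k_si * A)
print(f"R_BCB ~ {r_bcb:.0f} K/W  vs  R_Si ~ {r_si:.2f} K/W")
```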
Abstract:
We present an electroluminescence (EL) study of Si-rich silicon oxide (SRSO) LEDs with and without Er3+ ions under two polarization schemes: direct current (DC) and pulsed voltage (PV). The power efficiency of the devices and their main optical limitations are presented. We show that under the PV polarization scheme the devices achieve performance one order of magnitude higher than under DC. Time-resolved measurements show that this enhancement occurs only for active layers whose annealing temperature is high enough (>1000 °C) for silicon nanocrystal (Si-nc) formation. The system was modeled with rate equations, and excitation cross-sections for both Si-ncs and Er3+ ions were extracted.
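The sketch below integrates a minimal two-level rate equation of the kind used for such modeling, with an excitation term active during a voltage pulse and an exponential decay afterwards; the cross-section, flux, lifetime, and density values are assumed for illustration and are not the extracted device parameters.

```python
# Minimal two-level rate-equation sketch (illustrative parameters, not the
# fitted device values): excited-emitter population under a pulse, then decay.
from scipy.integrate import solve_ivp

sigma = 1e-16    # excitation cross-section, cm^2 (assumed)
phi   = 1e21     # excitation flux during the pulse, cm^-2 s^-1 (assumed)
tau   = 50e-6    # decay lifetime, s (assumed)
N     = 1e18     # total emitter density, cm^-3 (assumed)

def rate(t, n_exc, pulse_on):
    pump = sigma * phi * (N - n_exc) if pulse_on else 0.0
    return pump - n_exc / tau

pulse = solve_ivp(rate, (0, 1e-4), [0.0], args=(True,), max_step=1e-6)
decay = solve_ivp(rate, (0, 1e-4), [pulse.y[0, -1]], args=(False,), max_step=1e-6)
print("steady-state excited fraction:", pulse.y[0, -1] / N)
```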
Abstract:
Fluvial deposits are a challenge for modelling flow in subsurface reservoirs. The connectivity and continuity of permeable bodies have a major impact on fluid flow in porous media. Contemporary object-based and multipoint statistics methods face the problem of robustly representing connected structures. An alternative approach to modelling petrophysical properties is based on a machine learning algorithm, Support Vector Regression (SVR). Semi-supervised SVR is able to establish spatial connectivity by taking into account prior knowledge on natural similarities. As a learning algorithm, SVR is robust to noise and captures dependencies from all available data. Semi-supervised SVR applied to a synthetic fluvial reservoir demonstrated robust results that are well matched to the flow performance.
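A simplified sketch of the core ingredient is given below: a plain supervised SVR interpolating a petrophysical property from sparse well data onto a grid. The semi-supervised extension described in the abstract, which also exploits unlabelled locations, is omitted, and all data, kernel settings, and variable names are illustrative assumptions.

```python
# Simplified sketch: supervised SVR mapping well coordinates to a petrophysical
# property (placeholder data; the paper's semi-supervised extension is omitted).
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
wells = rng.uniform(0, 1000, (40, 2))         # (x, y) positions of sampled wells, m
log_perm = rng.normal(3, 1, 40)               # placeholder log-permeability values

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(wells, log_perm)

xx, yy = np.meshgrid(np.linspace(0, 1000, 50), np.linspace(0, 1000, 50))
grid = np.column_stack([xx.ravel(), yy.ravel()])
perm_map = np.exp(model.predict(grid)).reshape(50, 50)   # interpolated property map
```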
Abstract:
Macroporosity is often used in the determination of soil compaction. Reduced macroporosity can lead to poor drainage, low root aeration and soil degradation. The aim of this study was to develop and test different models to estimate macro- and microporosity efficiently, using multiple regression. Ten soils were selected within a large range of textures: sand (Sa) 0.07-0.84, silt 0.03-0.24 and clay 0.13-0.78 kg kg⁻¹, and subjected to three compaction levels (three bulk densities, BD). Two models with similar accuracy were selected, with a mean error of about 0.02 m³ m⁻³ (2 %). The model y = a + b·BD + c·Sa, named model 2, was selected for its simplicity to estimate macroporosity (Ma), microporosity (Mi) or total porosity (TP): Ma = 0.693 - 0.465 BD + 0.212 Sa; Mi = 0.337 + 0.120 BD - 0.294 Sa; TP = 1.030 - 0.345 BD - 0.082 Sa; porosity values are expressed in m³ m⁻³, BD in kg dm⁻³ and Sa in kg kg⁻¹. The model was tested against a set of 76 data points from several other authors, and an error of about 0.04 m³ m⁻³ (4 %) was observed. Simulations of the variation in BD as a function of Sa are presented for Ma = 0 and Ma = 0.10 (10 %). The macroporosity equation was rearranged to obtain other compaction indexes: a) to simulate maximum bulk density (MBD) as a function of Sa (Equation 11), in agreement with literature data; b) to simulate relative bulk density (RBD) as a function of BD and Sa (Equation 13); and c) to simulate RBD as a function of Ma and Sa (Equation 16), confirming the independence of this variable from Sa for a fixed value of macroporosity and also confirming the hypothesis of Hakansson & Lipiec that RBD = 0.87 corresponds approximately to 10 % macroporosity (Ma = 0.10 m³ m⁻³).
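For convenience, the sketch below directly implements the model-2 equations quoted above (BD in kg dm⁻³, Sa in kg kg⁻¹, porosities in m³ m⁻³); the example input values are arbitrary.

```python
# Model 2 from the abstract: porosity fractions from bulk density (BD) and sand (Sa).
def macro(bd, sa):  # macroporosity, m3/m3
    return 0.693 - 0.465 * bd + 0.212 * sa

def micro(bd, sa):  # microporosity, m3/m3
    return 0.337 + 0.120 * bd - 0.294 * sa

def total(bd, sa):  # total porosity, m3/m3 (= macro + micro)
    return 1.030 - 0.345 * bd - 0.082 * sa

bd, sa = 1.45, 0.60                      # example values (arbitrary)
print(macro(bd, sa), micro(bd, sa), total(bd, sa))

# Rearranging macro(bd, sa) = 0.10 gives the bulk density at 10 % macroporosity:
bd_at_ma10 = (0.693 + 0.212 * sa - 0.10) / 0.465
print(bd_at_ma10)
```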
Abstract:
Currently in Brazil, as in other parts of the world, there is great concern about the increase in degraded agricultural soils, which is mostly related to the occurrence of soil compaction. Although soil texture is recognized as a very important component of soil compressive behavior, few studies have quantified its influence on the structural changes of Latosols in the Brazilian Cerrado region. This study aimed to evaluate structural changes and the compressive behavior of Latosols in Rio Verde, Goiás, through the modeling of additional soil compaction. The study was carried out using five Latosols with very different textures, under different soil compaction levels. Water retention curves, soil compression curves and bearing capacity models were determined from undisturbed samples collected in the B horizons. Results indicated that clayey and very clayey Latosols were more susceptible to compression than medium-textured soils. Soil compression curves at density values associated with edaphic functions were used to determine the beneficial pressure (σb), i.e., the pressure with optimal water retention, and the critical pressure (σcrMAC), i.e., the pressure at which macroporosity falls below critical levels. These pressure values were higher than the preconsolidation pressure (σp) and were therefore characterized as additional compaction. Based on the compressive behavior of these Latosols, it can be concluded that the combined use of the preconsolidation, beneficial and critical pressures allows a better understanding of the compression processes of Latosols.
Abstract:
A method for determining the soil hydraulic properties of a weathered tropical soil (Oxisol) using a medium-sized column with undisturbed soil is presented. The method was used to determine fitting parameters of the water retention curve and hydraulic conductivity functions of a soil column in support of a pesticide leaching study. The soil column was extracted from a continuously used research plot in Central Oahu (Hawaii, USA), and its internal structure was examined by computed tomography. The experiment was based on tension infiltration into the soil column with free outflow at the lower end. Water flow through the soil core was modeled with a computer code that numerically solves the one-dimensional Richards equation. Directly measured soil hydraulic parameters were used for direct simulation, and the retention and hydraulic conductivity parameters were then estimated by inverse modeling. The inverse modeling produced very good agreement between model outputs and the measured flux and pressure head data for the relatively homogeneous column. The moisture content at a given pressure from the retention curve measured directly on small soil samples was lower than that obtained through parameter optimization based on experiments with the medium-sized undisturbed soil column.
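As a minimal stand-in for the inverse estimation step described above, the sketch below fits van Genuchten retention parameters to a handful of (suction, water content) pairs; the data points, initial guesses, bounds, and the choice of the van Genuchten form itself are illustrative assumptions rather than the study's Richards-equation inverse solution.

```python
# Illustrative sketch (not the study's inverse Richards-equation solution):
# fit van Genuchten retention parameters to placeholder retention data.
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water content theta as a function of suction head h (cm), alpha in 1/cm."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

h_obs = np.array([10., 30., 100., 300., 1000., 5000.])         # suction, cm
theta_obs = np.array([0.48, 0.44, 0.38, 0.33, 0.28, 0.24])     # placeholder data

p0 = (0.15, 0.50, 0.05, 1.5)                                   # initial guess
popt, _ = curve_fit(van_genuchten, h_obs, theta_obs, p0=p0,
                    bounds=([0.0, 0.3, 1e-4, 1.01], [0.3, 0.7, 1.0, 5.0]))
print("theta_r, theta_s, alpha, n =", popt)
```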
Abstract:
Toxicokinetic modeling is a useful tool to describe or predict the behavior of a chemical agent in the human or animal organism. A general model based on four compartments was developed in a previous study in order to quantify the effect of human variability on a wide range of biological exposure indicators. The aim of this study was to adapt this existing general toxicokinetic model to three organic solvents, namely methyl ethyl ketone, 1-methoxy-2-propanol and 1,1,1-trichloroethane, and to take sex differences into account. In a previous human volunteer study, we assessed the impact of sex on different biomarkers of exposure corresponding to the three organic solvents mentioned above. Results from that study suggested that not only physiological differences between men and women but also differences in sex hormone levels could influence the toxicokinetics of the solvents. In fact, the use of hormonal contraceptives had an effect on the urinary levels of several biomarkers, suggesting that exogenous sex hormones could influence CYP2E1 enzyme activity. These experimental data were used to calibrate the toxicokinetic models developed in this study. Our results showed that it was possible to use an existing general toxicokinetic model for other compounds. In fact, most of the simulation results showed good agreement with the experimental data obtained for the studied solvents, with the percentage of model predictions lying within the 95% confidence interval varying from 44.4% to 90%. The results pointed out that, for the same exposure conditions, men and women can show important differences in urinary levels of biological indicators of exposure. Moreover, when running the models under simulated industrial working conditions, these differences could be even more pronounced. In conclusion, a general and simple toxicokinetic model, adapted to three well-known organic solvents, allowed us to show that metabolic parameters can have an important impact on the urinary levels of the corresponding biomarkers. These observations give evidence of an interindividual variability, an aspect that should have its place in approaches for setting occupational exposure limits.
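A minimal sketch of the compartmental idea is given below: a single lumped compartment with a first-order metabolic clearance standing in for CYP2E1-mediated elimination, exposed over an 8-hour shift. The published model has four compartments and calibrated parameters; the structure, rate constants, and volume used here are simplified assumptions.

```python
# Minimal one-compartment sketch (not the published four-compartment model):
# solvent concentration during and after an 8-h exposure, with first-order
# metabolic clearance standing in for CYP2E1 activity.
from scipy.integrate import solve_ivp

V = 40.0        # apparent distribution volume, L (assumed)
k_in = 5.0      # uptake rate during exposure, mg/h (assumed)
cl_met = 8.0    # metabolic clearance, L/h (assumed; sex/hormone dependent)

def dcdt(t, c):
    uptake = k_in / V if t < 8.0 else 0.0      # exposure only during the shift
    return uptake - (cl_met / V) * c

sol = solve_ivp(dcdt, (0, 24), [0.0], max_step=0.05)
print("peak concentration (mg/L):", sol.y[0].max())
```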
Abstract:
With the advancement of high-throughput sequencing and the dramatic increase in available genetic data, statistical modeling has become an essential part of the field of molecular evolution. Statistical modeling has led to many interesting discoveries in the field, from the detection of highly conserved or diverse regions in a genome to the phylogenetic inference of species' evolutionary history. Among the different types of genome sequences, protein-coding regions are particularly interesting due to their impact on proteins. The building blocks of proteins, i.e. amino acids, are coded by triplets of nucleotides, known as codons. Accordingly, studying the evolution of codons leads to a fundamental understanding of how proteins function and evolve. Current codon models can be classified into three principal groups: mechanistic codon models, empirical codon models and hybrid ones. The mechanistic models attract particular attention due to the clarity of their underlying biological assumptions and parameters. However, they suffer from simplifying assumptions that are required to overcome the burden of computational complexity. The main assumptions applied to current mechanistic codon models are that (a) double and triple substitutions of nucleotides within codons are negligible, (b) there is no mutation variation among the nucleotides of a single codon, and (c) the HKY nucleotide model is sufficient to capture the essence of transition-transversion rates at the nucleotide level. In this thesis, I pursue two main objectives. The first is to develop a framework of mechanistic codon models, named the KCM-based model family framework, based on holding or relaxing the assumptions above. Accordingly, eight different models are proposed, corresponding to the eight combinations of holding or relaxing the assumptions, from the simplest one that holds all of them to the most general one that relaxes all of them. The models derived from the proposed framework allow me to investigate the biological plausibility of the three simplifying assumptions on real data sets, as well as to find the best model aligned with the underlying characteristics of each data set. Our experiments show that holding all three assumptions is not realistic for any of the real data sets; using simple models that hold these assumptions can therefore be misleading and result in inaccurate parameter estimates. The second objective is to develop a generalized mechanistic codon model that relaxes the three simplifying assumptions while remaining computationally efficient, by using a matrix operation called the Kronecker product. Our experiments show that, on randomly selected data sets, the proposed generalized mechanistic codon model outperforms the other codon models with respect to the AICc metric in about half of the data sets. Furthermore, several experiments show that the proposed general model is biologically plausible.
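To make the Kronecker idea concrete, the sketch below assembles a 64 x 64 codon-level generator from three position-specific 4 x 4 HKY-like nucleotide matrices via a Kronecker sum. This baseline allows only single-nucleotide changes per instant, i.e. it still embodies assumption (a) that the thesis generalizes; the matrix values are illustrative, and this is not the thesis's exact KCM construction.

```python
# Hedged sketch of the Kronecker idea (not the thesis's exact KCM construction):
# build a 64x64 codon generator from per-position 4x4 nucleotide rate matrices.
import numpy as np

def hky(kappa=2.0, pi=(0.25, 0.25, 0.25, 0.25)):
    """Simple HKY-like rate matrix over the nucleotide order (A, C, G, T)."""
    pi = np.asarray(pi)
    q = np.zeros((4, 4))
    for i in range(4):
        for j in range(4):
            if i != j:
                transition = {i, j} == {0, 2} or {i, j} == {1, 3}   # A<->G or C<->T
                q[i, j] = (kappa if transition else 1.0) * pi[j]
        q[i, i] = -q[i].sum()
    return q

def kron_sum(a, b):
    """Kronecker sum: generator of two positions evolving independently."""
    return np.kron(a, np.eye(b.shape[0])) + np.kron(np.eye(a.shape[0]), b)

q1, q2, q3 = hky(), hky(kappa=4.0), hky()      # per-codon-position nucleotide models
q_codon = kron_sum(kron_sum(q1, q2), q3)       # 64x64; single-nucleotide changes only
print(q_codon.shape, np.allclose(q_codon.sum(axis=1), 0.0))
```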
Abstract:
Modeling concentration-response functions has become extremely popular in ecotoxicology during the last decade. Indeed, modeling makes it possible to determine the full response pattern of a given substance. However, reliable modeling demands a lot of data, which is in contradiction with the current trend in ecotoxicology of reducing, for cost and ethical reasons, the number of data points produced during an experiment. It is therefore crucial to choose experimental designs in a cost-effective manner. In this paper, we propose to use the theory of locally D-optimal designs to determine the set of concentrations to be tested so that the parameters of the concentration-response function can be estimated with high precision. We illustrated this approach by determining the locally D-optimal designs for estimating the toxicity of the herbicide dinoseb to daphnids and algae. The results show that the number of concentrations to be tested is often equal to the number of parameters and that the concentrations are often related to the meaning of the parameters, i.e. they are located close to the parameter values. Furthermore, the results show that the locally D-optimal design often has the minimal number of support points and is not very sensitive to small changes in the nominal values of the parameters. In order to reduce the experimental cost and the use of test organisms, especially in the case of long-term studies, reliable nominal values may therefore be fixed based on prior knowledge and a literature search instead of on preliminary experiments.
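The sketch below runs a brute-force search for a locally D-optimal two-point design for a simple two-parameter log-logistic concentration-response curve, assuming homoscedastic errors; the nominal parameter values, the model form, and the candidate grid are illustrative assumptions, not the settings of the dinoseb study.

```python
# Illustrative locally D-optimal design search (two-parameter log-logistic
# response, homoscedastic errors, assumed nominal parameter values).
import numpy as np
from itertools import combinations

ec50, slope = 1.0, 2.0                     # nominal ("local") parameter values

def grad(x):
    """Gradient of f(x) = 1 / (1 + (x/ec50)**slope) w.r.t. (ec50, slope)."""
    r = (x / ec50) ** slope
    f = 1.0 / (1.0 + r)
    return np.array([(slope / ec50) * r * f**2,        # d f / d ec50
                     -np.log(x / ec50) * r * f**2])    # d f / d slope

candidates = np.logspace(-2, 2, 60)                    # candidate test concentrations
best = max(combinations(candidates, 2),
           key=lambda d: np.linalg.det(sum(np.outer(grad(x), grad(x)) for x in d)))
print("D-optimal two-point design:", best)
```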
Abstract:
Soil infiltration is a key link in the natural water cycle. Studies on soil permeability support water resources assessment and estimation, runoff regulation and management, soil erosion modeling, and the control of nonpoint and point source pollution of farmland, among other aspects. The respective influences of rainfall duration, rainfall intensity, antecedent soil moisture, vegetation cover, vegetation type and slope gradient on cumulative soil infiltration were studied under simulated rainfall on different underlying surfaces. We established a six-factor model of cumulative soil infiltration using an improved back-propagation (BP) artificial neural network algorithm with a momentum term and a self-adjusting learning rate. Compared with the multiple nonlinear regression method, the stability and accuracy of the improved BP algorithm were better. Based on the improved BP model, the sensitivity of cumulative soil infiltration to these six factors was investigated. In addition, the grey relational analysis method was used to study the grey correlations between each of these six factors and cumulative soil infiltration. The results of the two methods were very similar: rainfall duration was the most influential factor, followed by vegetation cover, vegetation type, rainfall intensity and antecedent soil moisture, while the effect of slope gradient on cumulative soil infiltration was not significant.
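A toy sketch of the training rule described above is given below: a small one-hidden-layer network trained by back-propagation with a momentum term and a simple "bold driver" self-adjusting learning rate. The network size, the data (random placeholders for the six factors), and the adjustment factors are illustrative assumptions, not the paper's trained model.

```python
# Toy sketch of back-propagation with momentum and a self-adjusting learning
# rate (placeholder random data for the six factors; not the paper's model).
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(size=(100, 6))             # six influencing factors (placeholders)
y = rng.uniform(size=(100, 1))             # cumulative infiltration (placeholder)

w1, b1 = rng.normal(0, 0.5, (6, 8)), np.zeros(8)
w2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
params = [w1, b1, w2, b2]
vel = [np.zeros_like(p) for p in params]
lr, mom, prev_loss = 0.05, 0.9, np.inf

for epoch in range(500):
    h = np.tanh(X @ w1 + b1)               # forward pass
    err = (h @ w2 + b2) - y
    loss = float(np.mean(err ** 2))

    g_out = 2 * err / len(X)               # back-propagate the MSE gradient
    dh = (g_out @ w2.T) * (1 - h ** 2)
    grads = [X.T @ dh, dh.sum(0), h.T @ g_out, g_out.sum(0)]

    lr *= 1.05 if loss < prev_loss else 0.7    # self-adjusting ("bold driver") rate
    prev_loss = loss
    for p, v, g in zip(params, vel, grads):
        v *= mom                           # momentum term
        v -= lr * g
        p += v                             # in-place update keeps references valid

print("final MSE:", prev_loss)
```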
Abstract:
Modeling of water movement in unsaturated soil usually requires a large number of parameters and variables, such as initial soil water content, saturated water content and saturated hydraulic conductivity, which can be assessed relatively easily. Water flow in the soil is usually modeled by a nonlinear partial differential equation known as the Richards equation. Since this equation cannot be solved analytically in certain cases, one way to approach its solution is through numerical algorithms. The success of numerical models in describing the dynamics of water in the soil is closely related to the accuracy with which the water-physical parameters are determined. This has been a major challenge in the use of numerical models, because these parameters are generally difficult to determine, as they present great spatial variability in the soil. Therefore, it is necessary to develop and use methods that properly incorporate the uncertainties inherent to water displacement in soils. In this paper, a model based on fuzzy logic is used as an alternative to describe water flow in the vadose zone. The fuzzy model was developed to simulate the displacement of water in a non-vegetated crop soil during the period known as the emergence phase. The model consists of a Mamdani fuzzy rule-based system in which the rules are based on the moisture content of adjacent soil layers. The performance of the fuzzy system was evaluated by comparing the simulated evolution of moisture profiles over time with those obtained in the field. The fuzzy model provided a satisfactory reproduction of the soil moisture profiles.
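A minimal Mamdani-style inference step is sketched below: two illustrative rules map the moisture of the layer above to the relative flux entering the layer below, using triangular membership functions, min implication, max aggregation, and centroid defuzzification. The rule base, membership functions, and universes are invented for illustration and are not the paper's calibrated system.

```python
# Minimal Mamdani-style sketch (illustrative rules, not the paper's rule base):
# triangular sets, min implication, max aggregation, centroid defuzzification.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

flux_u = np.linspace(0.0, 1.0, 201)             # output universe (relative flux)
flux_low  = tri(flux_u, 0.0, 0.25, 0.5)
flux_high = tri(flux_u, 0.5, 0.75, 1.0)

def infer(theta_above):
    """Infer relative downward flux from the moisture of the layer above."""
    mu_dry = tri(theta_above, 0.0, 0.10, 0.30)   # Rule 1: IF above is DRY THEN flux is LOW
    mu_wet = tri(theta_above, 0.20, 0.40, 0.50)  # Rule 2: IF above is WET THEN flux is HIGH
    agg = np.maximum(np.minimum(mu_dry, flux_low), np.minimum(mu_wet, flux_high))
    return float(np.sum(agg * flux_u) / np.sum(agg))   # centroid defuzzification

print(infer(0.12), infer(0.35))                 # drier vs. wetter upper layer
```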