968 results for Computational prediction


Relevance: 30.00%

Abstract:

MicroRNAs (miRs) are involved in the pathogenesis of several neoplasms; however, there are no data on their expression patterns and possible roles in adrenocortical tumors. Our objective was to study adrenocortical tumors by an integrative bioinformatics analysis involving miR and transcriptomics profiling, pathway analysis, and a novel, tissue-specific miR target prediction approach. Thirty-six tissue samples, including normal adrenocortical tissues, benign adenomas, and adrenocortical carcinomas (ACCs), were studied by simultaneous miR and mRNA profiling. Novel data-processing software was used to identify all predicted miR-mRNA interactions retrieved from PicTar, TargetScan, and miRBase. Tissue-specific target prediction was achieved by filtering out mRNAs with undetectable expression and searching for mRNA targets whose expression changed inversely to that of their regulatory miRs. Target sets and significant microarray data were subjected to Ingenuity Pathway Analysis. Six miRs with significantly different expression were found: miR-184 and miR-503 showed significantly higher, whereas miR-511 and miR-214 showed significantly lower, expression in ACCs than in the other groups. Expression of miR-210 was significantly lower in cortisol-secreting adenomas than in ACCs. By calculating the difference between dCT(miR-511) and dCT(miR-503) (delta cycle threshold), ACCs could be distinguished from benign adenomas with high sensitivity and specificity. Pathway analysis revealed the possible involvement of G2/M checkpoint damage in ACC pathogenesis. To our knowledge, this is the first report describing miR expression patterns and pathway analysis in sporadic adrenocortical tumors. miR biomarkers may be helpful for the diagnosis of adrenocortical malignancy, and this tissue-specific target prediction approach may be applicable to other tumor types as well.
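The tissue-specific filtering can be sketched in a few lines of Python; the expression values, gene names, and predicted pairs below are invented stand-ins for the microarray and database data:

```python
# Hypothetical log2 fold-changes (tumor vs. normal); None marks transcripts
# below the detection threshold on the array.
mirna_change = {"miR-503": +2.1, "miR-511": -1.8}
mrna_change = {"GENE_A": -1.2, "GENE_B": +0.9, "GENE_C": None}

# Predicted miR->mRNA pairs pooled from databases such as TargetScan (illustrative).
predicted_pairs = [("miR-503", "GENE_A"), ("miR-503", "GENE_C"), ("miR-511", "GENE_B")]

def tissue_specific_targets(pairs, mir_fc, mrna_fc):
    """Keep pairs whose mRNA is expressed and changes inversely to its miR."""
    kept = []
    for mir, gene in pairs:
        fc = mrna_fc.get(gene)
        if fc is None:              # filter out undetectable transcripts
            continue
        if mir_fc[mir] * fc < 0:    # inverse expression alteration
            kept.append((mir, gene))
    return kept

print(tissue_specific_targets(predicted_pairs, mirna_change, mrna_change))
# -> [('miR-503', 'GENE_A'), ('miR-511', 'GENE_B')]
```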

Relevance: 30.00%

Abstract:

Accurate prediction of transcription factor binding sites is needed to unravel the function and regulation of genes discovered in genome sequencing projects. To evaluate current computer prediction tools, we have begun a systematic study of the sequence-specific DNA-binding of a transcription factor belonging to the CTF/NFI family. Using a systematic collection of rationally designed oligonucleotides combined with an in vitro DNA binding assay, we found that the sequence specificity of this protein cannot be represented by a simple consensus sequence or weight matrix. In particular, CTF/NFI uses a flexible DNA binding mode that allows for variations of the binding site length. From the experimental data, we derived a novel prediction method using a generalised profile as a binding site predictor. Experimental evaluation of the generalised profile indicated that it accurately predicts the binding affinity of the transcription factor to natural or synthetic DNA sequences. Furthermore, the in vitro measured binding affinities of a subset of oligonucleotides were found to correlate with their transcriptional activities in transfected cells. The combined computational-experimental approach exemplified in this work thus resulted in an accurate prediction method for CTF/NFI binding sites potentially functioning as regulatory regions in vivo.
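Why a simple weight matrix falls short can be seen from how such matrices are scored: each column scores exactly one position, so the site length is fixed. A minimal sketch with an invented 4-bp log-odds matrix:

```python
def pwm_score(seq, pwm):
    """Sum per-position log-odds scores; only defined when len(seq) == len(pwm)."""
    return sum(col[base] for base, col in zip(seq, pwm))

# Toy matrix (log-odds vs. a uniform background), consensus "ATGC":
pwm = [
    {"A": 1.5, "C": -1.0, "G": -1.0, "T": -1.0},
    {"A": -1.0, "C": -1.0, "G": -1.0, "T": 1.5},
    {"A": -1.0, "C": -1.0, "G": 1.5, "T": -1.0},
    {"A": -1.0, "C": 1.5, "G": -1.0, "T": -1.0},
]

print(pwm_score("ATGC", pwm))   # -> 6.0, the consensus site

# A one-base insertion ("ATAGC") cannot be scored by a fixed-length matrix:
# every downstream column is misaligned. A generalised profile adds
# insertion/deletion states with penalties, so variable-length sites
# can still be scored and ranked, which is the key advantage exploited above.
```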

Relevance: 30.00%

Abstract:

High-throughput prioritization of cancer-causing mutations (drivers) is a key challenge of cancer genome projects, due to the number of somatic variants detected in tumors. One important step in this task is to assess the functional impact of tumor somatic mutations. A number of computational methods have been employed for that purpose, although most were originally developed to distinguish disease-related nonsynonymous single nucleotide variants (nsSNVs) from polymorphisms. Our new method, transformed Functional Impact score for Cancer (transFIC), improves the assessment of the functional impact of tumor nsSNVs by taking into account the baseline tolerance of genes to functional variants.
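The core idea, rescaling a raw impact score against the score distribution of tolerated variants in comparable genes, can be sketched as follows (all numbers are made up for illustration; the real transFIC method works with scores from tools such as SIFT or PolyPhen2):

```python
import statistics

def transformed_impact(score, baseline_scores):
    """Rescale a raw functional-impact score by a gene group's baseline
    tolerance: the distribution of scores of common (presumably tolerated)
    germline variants in functionally similar genes."""
    mu = statistics.mean(baseline_scores)
    sigma = statistics.stdev(baseline_scores)
    return (score - mu) / sigma

# The same raw score of 0.8 means very different things in different genes:
tolerant_gene_group = [0.5, 0.7, 0.9, 0.6, 0.8]   # polymorphisms often score high
constrained_group = [0.1, 0.2, 0.15, 0.1, 0.2]    # polymorphisms rarely score high

print(round(transformed_impact(0.8, tolerant_gene_group), 2))   # -> 0.63
print(round(transformed_impact(0.8, constrained_group), 2))     # -> 13.0
```

In the tolerant group the mutation looks unremarkable; in the constrained group the identical raw score is a strong outlier, which is the signal a driver-prioritization pipeline wants.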

Relevance: 30.00%

Abstract:

Acid-sensing ion channels (ASICs) are key receptors for extracellular protons. These neuronal non-voltage-gated Na+ channels are involved in learning, the expression of fear, neurodegeneration after ischemia, and pain sensation. We have applied a systematic approach to identify potential pH sensors in ASIC1a and to elucidate the mechanisms by which pH variations govern ASIC gating. We first calculated the pKa value of all extracellular His, Glu, and Asp residues using a Poisson-Boltzmann continuum approach based on the ASIC three-dimensional structure, to identify candidate pH-sensing residues. The role of these residues was then assessed by site-directed mutagenesis and chemical modification, combined with functional analysis. The localization of the putative pH-sensing residues suggests that pH changes control ASIC gating by protonation/deprotonation of many residues per subunit in different channel domains. Analysis of the function of residues in the palm domain, close to the central vertical axis of the channel, allowed prediction of the conformational changes this region undergoes during gating. Our study provides a basis for understanding the intrinsic pH dependence of ASICs and describes an approach that can also be applied to investigate the mechanisms of pH dependence in other proteins.
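The link between a residue's computed pKa and pH sensing follows from the Henderson-Hasselbalch relation: a residue whose pKa sits within the physiological pH range changes its protonation state sharply for a small pH drop. A sketch with an illustrative pKa (not one of the study's computed values):

```python
def protonated_fraction(pH, pKa):
    """Henderson-Hasselbalch: fraction of a titratable side chain that is
    protonated at a given pH."""
    return 1.0 / (1.0 + 10 ** (pH - pKa))

# A residue with pKa ~7 (illustrative) responds strongly to the extracellular
# acidification that activates ASICs:
for pH in (7.4, 6.8):
    print(round(protonated_fraction(pH, 7.0), 3))
# -> 0.285 at resting pH 7.4, 0.613 after a drop to pH 6.8
```

A residue with a pKa far outside this range (say 4.0) would barely change over the same pH interval, which is why computed pKa values are a useful first filter for candidate sensors.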

Relevance: 30.00%

Abstract:

The Mechanistic-Empirical Pavement Design Guide (MEPDG) was developed under National Cooperative Highway Research Program (NCHRP) Project 1-37A as a novel mechanistic-empirical procedure for the analysis and design of pavements. The MEPDG was subsequently supported by AASHTO’s DARWin-ME and most recently marketed as AASHTOWare Pavement ME Design software as of February 2013. Although the core design process and computational engine have remained the same over the years, some enhancements to the pavement performance prediction models have been implemented along with other documented changes as the MEPDG transitioned to AASHTOWare Pavement ME Design software. Preliminary studies were carried out to determine possible differences between AASHTOWare Pavement ME Design, MEPDG (version 1.1), and DARWin-ME (version 1.1) performance predictions for new jointed plain concrete pavement (JPCP), new hot mix asphalt (HMA), and HMA over JPCP systems. Differences were indeed observed between the pavement performance predictions produced by these different software versions. Further investigation was needed to verify these differences and to evaluate whether identified local calibration factors from the latest MEPDG (version 1.1) were acceptable for use with the latest version (version 2.1.24) of AASHTOWare Pavement ME Design at the time this research was conducted. Therefore, the primary objective of this research was to examine AASHTOWare Pavement ME Design performance predictions using previously identified MEPDG calibration factors (through InTrans Project 11-401) and, if needed, refine the local calibration coefficients of AASHTOWare Pavement ME Design pavement performance predictions for Iowa pavement systems using linear and nonlinear optimization procedures. A total of 130 representative sections across Iowa consisting of JPCP, new HMA, and HMA over JPCP sections were used. 
The local calibration results of AASHTOWare Pavement ME Design are presented and compared with national and locally calibrated MEPDG models.
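As a toy illustration of what local calibration does, the sketch below fits a single least-squares scale factor between nationally calibrated predictions and locally measured distress. The actual AASHTOWare models use multiple coefficients and the linear and nonlinear optimisation procedures described above; all numbers here are hypothetical:

```python
def linear_calibration_factor(predicted, measured):
    """Least-squares scale factor beta minimizing sum((measured - beta*predicted)^2).
    A one-parameter stand-in for local calibration of a distress model."""
    num = sum(p * m for p, m in zip(predicted, measured))
    den = sum(p * p for p in predicted)
    return num / den

# Hypothetical rutting predictions (mm) from a nationally calibrated model
# vs. distress measured on local sections:
national_pred = [2.0, 4.0, 6.0, 8.0]
measured = [1.0, 2.2, 2.9, 4.1]

beta = linear_calibration_factor(national_pred, measured)
print(round(beta, 3))   # -> 0.508: the national model over-predicts locally
```

Scaling the national predictions by this factor is the simplest form of the local adjustment; the research above refines several such coefficients per distress model over 130 Iowa sections.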

Relevance: 30.00%

Abstract:

Gene transfer in eukaryotic cells and organisms suffers from epigenetic effects that result in low or unstable transgene expression and high clonal variability. Use of epigenetic regulators such as matrix attachment regions (MARs) is a promising approach to alleviate such unwanted effects. Dissection of a known MAR allowed the identification of sequence motifs that mediate elevated transgene expression. Bioinformatics analysis implied that these motifs adopt a curved DNA structure that positions nucleosomes and binds specific transcription factors. From these observations, we computed putative MARs from the human genome. Cloning of several predicted MARs indicated that they are much more potent than the previously known element, boosting the expression of recombinant proteins from cultured cells as well as mediating high and sustained expression in mice. Thus we computationally identified potent epigenetic regulators, opening new strategies toward high and stable transgene expression for research, therapeutic production or gene-based therapies.

Relevance: 30.00%

Abstract:

Membrane proteins account for about 20% to 30% of all proteins encoded in a typical genome. They play central roles in multiple cellular processes, mediating the interaction of the cell with its surroundings. Over 60% of all drug targets contain a membrane domain. The experimental difficulty of obtaining crystal structures severely limits our understanding of membrane protein function. Computational evolutionary studies of proteins are crucial for the prediction of 3D structures. In this project, we construct a tool that quantifies the positive selective pressure on each residue of a membrane protein through maximum likelihood phylogeny reconstruction. The resulting conservation plot, combined with a structural homology model, is also a potent tool for predicting the residues that play essential roles in the structure and function of a membrane protein, and can be very useful in the design of validation experiments.

Relevance: 30.00%

Abstract:

Understanding the basis on which recruiters form hirability impressions for a job applicant is a key issue in organizational psychology and can be addressed as a social computing problem. We approach the problem from a face-to-face, nonverbal perspective where behavioral feature extraction and inference are automated. This paper presents a computational framework for the automatic prediction of hirability. To this end, we collected an audio-visual dataset of real job interviews where candidates were applying for a marketing job. We automatically extracted audio and visual behavioral cues related to both the applicant and the interviewer. We then evaluated several regression methods for the prediction of hirability scores and showed the feasibility of conducting such a task, with ridge regression explaining 36.2% of the variance. Feature groups were analyzed, and two main groups of behavioral cues were predictive of hirability: applicant audio features and interviewer visual cues, showing the predictive validity of cues related not only to the applicant, but also to the interviewer. As a last step, we analyzed the predictive validity of psychometric questionnaires often used in the personnel selection process, and found that these questionnaires were unable to predict hirability, suggesting that hirability impressions were formed based on the interaction during the interview rather than on questionnaire data.
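As a sketch of the regression step, the snippet below fits a one-feature ridge model and reports the variance explained (R²), the metric behind the reported 36.2%. The data are invented and the real framework regresses hirability scores on many audio-visual cues at once:

```python
def ridge_1d(x, y, lam):
    """Closed-form ridge estimate for a single centred feature:
    beta = sum(x*y) / (sum(x^2) + lambda)."""
    return sum(xi * yi for xi, yi in zip(x, y)) / (sum(xi * xi for xi in x) + lam)

def r_squared(y, y_hat):
    """Fraction of variance explained by the predictions."""
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Hypothetical centred data: x = one behavioral cue (e.g., applicant
# speaking time), y = hirability score.
x = [-2.0, -1.0, 0.0, 1.0, 2.0]
y = [-1.9, -1.2, 0.1, 0.8, 2.2]
beta = ridge_1d(x, y, lam=1.0)
r2 = r_squared(y, [beta * xi for xi in x])
print(round(beta, 3), round(r2, 3))   # -> 0.927 0.979
```

The ridge penalty `lam` shrinks the coefficient toward zero, which stabilizes the fit when many correlated behavioral cues are used together, as in the study above.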

Relevance: 30.00%

Abstract:

The application of computational fluid dynamics (CFD) and finite element analysis (FEA) has been growing rapidly in various fields of science and technology. One area of interest is biomedical engineering. Altered hemodynamics inside blood vessels plays a key role in the development of the arterial disease called atherosclerosis, a major cause of death worldwide. Atherosclerosis is often treated with a stenting procedure to restore normal blood flow. A stent is a tubular, flexible structure, usually made of metal, which is delivered to and expanded in the blocked artery. Despite the success rate of the stenting procedure, it is often associated with restenosis (re-narrowing of the artery): the presence of a non-biological device in the artery causes inflammation or regrowth of atherosclerotic lesions in the treated vessel. Several factors, including stent design, type of stent expansion, expansion pressure, and the morphology and composition of the vessel wall, influence the restenosis process. Therefore, computational studies are crucial for investigating and optimising the factors that influence post-stenting complications. This thesis focuses on stent-vessel wall interactions and the subsequent blood flow in the post-stenting stage of a stenosed human coronary artery. Hemodynamic and mechanical stresses were analysed in three separate stent-plaque-artery models. The plaque was modeled as a multi-layer domain (fibrous cap (FC), necrotic core (NC), and fibrosis (F)) and the arterial wall as a single-layer domain. CFD/FEA simulations were performed using commercial software packages in several models mimicking the various stages and morphologies of atherosclerosis. The tissue prolapse (TP) of the stented vessel wall, the distribution of von Mises stress (VMS) inside the various layers of the vessel wall, and the wall shear stress (WSS) along the luminal surface of the deformed vessel wall were measured and evaluated.
The results revealed the influence of stenosis size, the thickness of each layer of the atherosclerotic wall, stent strut thickness, the pressure applied for stenosis expansion, and the flow condition on the distribution of stresses. The thicknesses of the FC and NC and the total plaque thickness are critical in controlling the stresses inside the tissue: a small change in the morphology of the artery wall can significantly affect the stress distribution. In particular, the FC is the layer most sensitive to TP and stresses, which could determine a plaque's vulnerability to rupture. The WSS is highly influenced by the deflection of the artery, which in turn depends on the structural composition of the arterial wall layers. Together with stenosis size, these factors play a decisive role in controlling the low WSS values (<0.5 Pa) that are prone to restenosis. Moreover, time-dependent flow altered the percentage of luminal area with WSS values below 0.5 Pa at different time instants, and the non-Newtonian viscosity model of the blood significantly affects the predicted WSS magnitude. The outcomes of this investigation will help to better understand the roles of the individual layers of atherosclerotic vessels and their potential to provoke restenosis at the post-stenting stage. Consequently, implementing such an approach to assess post-stenting stresses will assist engineers and clinicians in optimising stenting techniques to minimise the occurrence of restenosis.
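The restenosis-prone area metric can be illustrated directly: given per-element WSS values and areas from a CFD surface mesh, compute the area-weighted fraction below the 0.5 Pa threshold. All values below are hypothetical:

```python
def low_wss_fraction(wss_values, areas, threshold=0.5):
    """Fraction of luminal surface area where wall shear stress (Pa) falls
    below a restenosis-prone threshold (0.5 Pa in the work above)."""
    low = sum(a for w, a in zip(wss_values, areas) if w < threshold)
    return low / sum(areas)

# Hypothetical per-element WSS (Pa) and element areas (mm^2):
wss = [0.2, 0.6, 0.4, 1.1, 0.3]
areas = [1.0, 2.0, 1.0, 1.5, 0.5]

print(round(low_wss_fraction(wss, areas), 3))   # -> 0.417
```

Evaluating this fraction at several instants of the cardiac cycle captures the time-dependent effect the thesis reports.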

Relevance: 30.00%

Abstract:

The main objective of this research is to estimate and characterize heterogeneous mass transfer coefficients in bench- and pilot-scale fluidized bed processes by means of computational fluid dynamics (CFD). A further objective is to benchmark the heterogeneous mass transfer coefficients predicted by fine-grid Eulerian CFD simulations against empirical data presented in the scientific literature. First, a fine-grid two-dimensional Eulerian CFD model with a solid and a gas phase was designed. The model is applied to transient two-dimensional simulations of char combustion in small-scale bubbling and turbulent fluidized beds. The same approach is used to simulate a novel fluidized bed energy conversion process developed for carbon capture: chemical looping combustion operated with a gaseous fuel. To analyze the results of the CFD simulations, two one-dimensional fluidized bed models were formulated: the single-phase and bubble-emulsion models were applied to derive the average gas-bed and interphase mass transfer coefficients, respectively. In the analysis, the effects of various fluidized bed operation parameters, such as fluidization velocity, particle and bubble diameter, reactor size, and chemical kinetics, on the heterogeneous mass transfer coefficients in the lower fluidized bed are evaluated extensively. The analysis shows that the fine-grid Eulerian CFD model can predict the heterogeneous mass transfer coefficients quantitatively with acceptable accuracy. Qualitatively, the CFD-based research of fluidized bed processes revealed several new scientific results, such as parametrical relationships. The spread of seven orders of magnitude within the bed Sherwood numbers presented in the literature could be explained by a change of controlling mechanisms in the overall heterogeneous mass transfer process as process conditions vary.
The research opens new process-specific insights into reactive fluidized bed processes, such as strong mass transfer control over the heterogeneous reaction rate, the dominance of interphase mass transfer in fine-particle fluidized beds, and a strong chemical-kinetic dependence of the average gas-bed mass transfer. The obtained mass transfer coefficients can be applied in fluidized bed models used for engineering design, reactor scale-up, and process research tasks, thereby providing enhanced accuracy in predicting the performance of fluidized bed processes.
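For context, a bed Sherwood number converts to a mass transfer coefficient via k = Sh·D/d_p, so the seven-order-of-magnitude spread in reported Sh translates into an equally wide spread in k. A sketch with illustrative (not study-specific) values:

```python
def mass_transfer_coefficient(sherwood, diffusivity, particle_diameter):
    """Convert a bed Sherwood number into a mass transfer coefficient:
    k = Sh * D / d_p (SI units: m/s from m^2/s and m)."""
    return sherwood * diffusivity / particle_diameter

# Illustrative values: gas-phase diffusivity ~2e-5 m^2/s, 1 mm char particle.
D, dp = 2e-5, 1e-3
for Sh in (0.01, 10.0):   # literature bed Sherwood numbers span orders of magnitude
    print(mass_transfer_coefficient(Sh, D, dp))
```

The same reaction would be strongly mass-transfer-limited at the low end of this range and kinetically limited at the high end, which is the change of controlling mechanism invoked above.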

Relevance: 30.00%

Abstract:

Understanding complex biological processes requires sophisticated experimental and computational approaches. Recent advances in functional genomics strategies now provide powerful tools for collecting data on the interconnectivity of genes, proteins, and small molecules, with the aim of studying the organizational principles of their cellular networks. Integrating this knowledge within a systems biology framework would enable the prediction of new functions for genes that remain uncharacterized to date. To make such predictions at the genomic scale in the yeast Saccharomyces cerevisiae, we developed an innovative strategy that combines high-throughput interactome screening of protein-protein interactions, in silico prediction of gene function, and validation of these predictions by high-throughput lipidomics. First, we performed a large-scale screen of protein-protein interactions using protein-fragment complementation. This method detects in vivo interactions between proteins expressed from their natural promoters. Moreover, unlike other existing techniques for detecting protein-protein interactions, no bias against membrane interactions could be demonstrated with this method. Consequently, we discovered several new interactions and increased the coverage of a lipid homeostasis interactome whose understanding remains incomplete to date. We then applied a learning algorithm to identify eight uncharacterized genes with a potential role in lipid metabolism.

Finally, we investigated whether these genes, as well as a distinct set of transcriptional regulators not previously implicated with lipids, play a role in lipid homeostasis. To this end, we analyzed the lipidomes of deletion mutants of selected genes. To examine a large number of strains, we developed a high-throughput platform for high-content lipidomic screening of yeast mutant libraries. This platform consists of high-resolution Orbitrap mass spectrometry and a dedicated data-processing framework supporting lipid phenotyping of hundreds of Saccharomyces cerevisiae mutants. The experimental lipidomics methods confirmed the functional predictions by demonstrating differences in the lipid metabolic phenotypes of deletion mutants lacking the genes YBR141C and YJR015W, implicated in lipid metabolism. An altered lipid phenotype was also observed for a deletion mutant of the transcription factor KAR4, which had not previously been linked to lipid metabolism. Together, these results demonstrate that a process integrating the acquisition of new molecular interactions, computational prediction of gene function, and an innovative high-throughput lipidomics platform constitutes an important addition to existing systems biology methodologies. Developments in functional genomics methodologies and lipidomics technologies thus provide new means of studying the biological networks of higher eukaryotes, including mammals. Consequently, the strategy presented here has the potential to be applied to more complex organisms.

Relevance: 30.00%

Abstract:

We propose a novel, simple, efficient and distribution-free re-sampling technique for developing prediction intervals for returns and volatilities following ARCH/GARCH models. In particular, our key idea is to employ a Box–Jenkins linear representation of an ARCH/GARCH equation and then to adapt a sieve bootstrap procedure to the nonlinear GARCH framework. Our simulation studies indicate that the new re-sampling method provides sharp and well calibrated prediction intervals for both returns and volatilities while reducing computational costs by up to 100 times, compared to other available re-sampling techniques for ARCH/GARCH models. The proposed procedure is illustrated by an application to Yen/U.S. dollar daily exchange rate data.
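The linear representation at the heart of the method can be checked numerically: for a GARCH(1,1) process, the squared returns satisfy an ARMA(1,1) recursion in the innovations nu_t = r_t^2 - sigma_t^2, which is what makes a sieve (linear) bootstrap applicable. A minimal sketch with arbitrary parameters:

```python
import random

random.seed(0)
w, a, b = 0.1, 0.1, 0.8           # illustrative GARCH(1,1) parameters
n = 500
sigma2 = [w / (1 - a - b)]        # start at the unconditional variance
r = [random.gauss(0, sigma2[0] ** 0.5)]
for t in range(1, n):
    sigma2.append(w + a * r[-1] ** 2 + b * sigma2[-1])
    r.append(random.gauss(0, sigma2[-1] ** 0.5))

# Box-Jenkins linear representation of the squared returns:
#   r_t^2 = w + (a + b) * r_{t-1}^2 + nu_t - b * nu_{t-1}
# where nu_t = r_t^2 - sigma2_t is a martingale-difference innovation.
nu = [ri ** 2 - s for ri, s in zip(r, sigma2)]
t = 10
lhs = r[t] ** 2
rhs = w + (a + b) * r[t - 1] ** 2 + nu[t] - b * nu[t - 1]
print(abs(lhs - rhs) < 1e-9)      # True: the identity holds at every t
```

Once the process is in this ARMA form, the sieve bootstrap resamples the fitted innovations to build prediction intervals for returns and volatilities without distributional assumptions.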

Relevance: 30.00%

Abstract:

Software systems are progressively being deployed in many facets of human life, and the impact of their failure on users varies widely. The fundamental aspect that underpins a software system is quality. Reliability describes the ability of a system to function in a specified environment for a specified period of time, and is used to measure quality objectively. Evaluating the reliability of a computing system involves computing both hardware and software reliability. Most earlier work focused on software reliability with no consideration of hardware, or vice versa. However, a complete estimate of the reliability of a computing system requires these two elements to be considered together, and thus demands a combined approach. The present work focuses on this and presents a model for evaluating the reliability of a computing system. The method involves identifying failure data for hardware and software components and building a model based on it to predict reliability. To develop such a model, focus is given to systems based on Open Source Software, since there is an increasing trend towards its use and only a few studies have been reported on modeling and measuring the reliability of such products. The present work includes a thorough study of the role of Free and Open Source Software and an evaluation of reliability growth models, and presents an integrated model for predicting the reliability of a computational system. The developed model is compared with existing models and its usefulness is discussed.
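The combined view can be illustrated with the simplest possible series model, in which the system works only while both hardware and software work. Constant failure rates are assumed here for brevity, whereas the thesis fits reliability growth models to observed failure data; the rates below are hypothetical:

```python
import math

def system_reliability(t, lambda_hw, lambda_sw):
    """Series model: R_sys(t) = R_hw(t) * R_sw(t). With constant failure
    rates, each component reliability is exp(-lambda * t)."""
    return math.exp(-lambda_hw * t) * math.exp(-lambda_sw * t)

# Hypothetical rates (failures/hour) for hardware and software:
print(round(system_reliability(100, 1e-4, 5e-4), 4))   # -> 0.9418
```

Even this toy model shows why ignoring one element inflates the estimate: the hardware-only reliability at t = 100 h would be exp(-0.01) ≈ 0.99, well above the combined value.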

Relevance: 30.00%

Abstract:

Post-transcriptional gene silencing by RNA interference is mediated by small interfering RNA (siRNA). This gene silencing mechanism can be exploited therapeutically against a wide variety of disease-associated targets, including AIDS, neurodegenerative diseases, high cholesterol, and cancer, in mouse models, with the hope of extending these approaches to treat humans. Over the recent past, a significant amount of work has been undertaken to understand the gene silencing mediated by exogenous siRNA. Designing efficient exogenous siRNA sequences is challenging because of many issues related to siRNA. Target mRNAs must be selected such that their corresponding siRNAs are likely to be efficient against that target and unlikely to accidentally silence other transcripts due to sequence similarity. Hence, before performing gene silencing with siRNAs, it is essential to analyze their off-target effects in addition to their inhibition efficiency against the intended target. Designing exogenous siRNA with good knock-down efficiency and target specificity is therefore an area of concern to be addressed. Some methods that consider both the inhibition efficiency and the off-target possibility of an siRNA against a gene have already been developed; of these, only a few achieve good inhibition efficiency, specificity, and sensitivity. The main focus of this thesis is to develop computational methods to optimize the efficiency of siRNA in terms of inhibition capacity and off-target possibility against target mRNAs, which may be useful in gene silencing and in drug design for tumors. This study investigates the currently available siRNA prediction approaches and devises a better computational approach to tackle the problem of siRNA efficacy in terms of inhibition capacity and off-target possibility.
The strengths and limitations of the available approaches were investigated and taken into consideration in developing an improved solution. The approaches proposed in this study thus extend some of the best-scoring previous state-of-the-art techniques by incorporating machine learning and statistical approaches, along with thermodynamic features such as whole stacking energy, to improve prediction accuracy, inhibition efficiency, sensitivity, and specificity. We propose one Support Vector Machine (SVM) model and two Artificial Neural Network (ANN) models for siRNA efficiency prediction. In the SVM model, the classification property is used to classify whether an siRNA is efficient or inefficient in silencing a target gene. The first ANN model, named siRNA Designer, is used for optimizing the inhibition efficiency of siRNA against target genes. The second ANN model, named Optimized siRNA Designer (OpsiD), produces efficient siRNAs with high inhibition efficiency against target genes with improved sensitivity and specificity, and identifies the off-target knockdown possibility of an siRNA against non-target genes. The models are trained and tested against a large dataset of siRNA sequences. The validations are conducted using the Pearson Correlation Coefficient, the Matthews Correlation Coefficient, Receiver Operating Characteristic analysis, prediction accuracy, sensitivity, and specificity. The approach OpsiD is found capable of predicting the inhibition capacity of an siRNA against a target mRNA with improved results over state-of-the-art techniques, and it clarifies the influence of whole stacking energy on siRNA efficiency. The model is further improved by adding the ability to identify the off-target possibility of a predicted siRNA on non-target genes.
Thus, the proposed model, OpsiD, can predict optimized siRNA by considering both inhibition efficiency on target genes and off-target possibility on non-target genes, with improved inhibition efficiency, specificity, and sensitivity. Since efforts have been taken to optimize siRNA efficacy in terms of inhibition efficiency and off-target possibility, we hope that the risk of off-target effects during gene silencing in various bioinformatics applications can be largely overcome. These findings may provide new insights into cancer diagnosis, prognosis, and therapy by gene silencing, and the approach may prove useful for designing exogenous siRNA for therapeutic applications and gene silencing techniques in different areas of bioinformatics.
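A first-pass off-target screen of the kind described above often scans non-target transcripts for matches to the guide strand's seed region (positions 2-8). A minimal sketch with invented sequences; real pipelines add mismatch tolerance and thermodynamic features:

```python
def revcomp(seq):
    """Reverse complement of an RNA sequence."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq))

def seed_match_offtargets(sirna_guide, transcripts):
    """Flag transcripts containing a perfect match to the reverse complement
    of the guide strand's seed (positions 2-8), a common first-pass screen."""
    seed = sirna_guide[1:8]            # positions 2-8, 0-based slice
    site = revcomp(seed)
    return [name for name, seq in transcripts.items() if site in seq]

# Invented 21-nt guide and non-target transcript fragments:
guide = "AUGGCUACGAUCGGAUUCAAG"
transcripts = {
    "GENE_X": "ACGGUAGCCAAGC",   # contains a perfect seed-complementary site
    "GENE_Y": "ACGGAAGCCAAGC",   # one mismatch in the site
}
print(seed_match_offtargets(guide, transcripts))   # -> ['GENE_X']
```

Transcripts flagged this way would be weighed against the predicted on-target inhibition efficiency when ranking candidate siRNAs.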

Relevance: 30.00%

Abstract:

An emerging consensus in cognitive science views the biological brain as a hierarchically-organized predictive processing system. This is a system in which higher-order regions are continuously attempting to predict the activity of lower-order regions at a variety of (increasingly abstract) spatial and temporal scales. The brain is thus revealed as a hierarchical prediction machine that is constantly engaged in the effort to predict the flow of information originating from the sensory surfaces. Such a view seems to afford a great deal of explanatory leverage when it comes to a broad swathe of seemingly disparate psychological phenomena (e.g., learning, memory, perception, action, emotion, planning, reason, imagination, and conscious experience). In the most positive case, the predictive processing story seems to provide our first glimpse at what a unified (computationally tractable and neurobiologically plausible) account of human psychology might look like. This obviously marks out one reason why such models should be the focus of current empirical and theoretical attention. Another reason, however, is rooted in the potential of such models to advance the current state-of-the-art in machine intelligence and machine learning. Interestingly, the vision of the brain as a hierarchical prediction machine is one that establishes contact with work that goes under the heading of 'deep learning'. Deep learning systems thus often attempt to make use of predictive processing schemes and (increasingly abstract) generative models as a means of supporting the analysis of large data sets. But are such computational systems sufficient (by themselves) to provide a route to general human-level analytic capabilities? I will argue that they are not and that closer attention to a broader range of forces and factors (many of which are not confined to the neural realm) may be required to understand what it is that gives human cognition its distinctive (and largely unique) flavour.
The vision that emerges is one of 'homomimetic deep learning systems', systems that situate a hierarchically-organized predictive processing core within a larger nexus of developmental, behavioural, symbolic, technological and social influences. Relative to that vision, I suggest that we should see the Web as a form of 'cognitive ecology', one that is as much involved with the transformation of machine intelligence as it is with the progressive reshaping of our own cognitive capabilities.