79 results for Computational prediction

at the Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance:

60.00%

Publisher:

Abstract:

Background: PPP1R6 is a protein phosphatase 1 glycogen-targeting subunit (PP1-GTS) abundant in skeletal muscle with an undefined metabolic control role. Here PPP1R6 effects on myotube glycogen metabolism, particle size and subcellular distribution are examined and compared with PPP1R3C/PTG and PPP1R3A/GM.

Results: PPP1R6 overexpression activates glycogen synthase (GS), reduces its phosphorylation at Ser-641/0 and increases the extracted and cytochemically stained glycogen content, less than PTG but more than GM. PPP1R6 does not change glycogen phosphorylase activity. All tested PP1-GTS cells have more glycogen particles than controls, as found by electron microscopy of myotube sections. Glycogen particle size is distributed in a continuous range for all cell types, but PPP1R6 forms smaller particles (mean diameter 14.4 nm) than PTG (36.9 nm) and GM (28.3 nm) or those in control cells (29.2 nm). Both PPP1R6- and GM-derived glycogen particles are in the cytosol associated with cellular structures; PTG-derived glycogen is found in membrane- and organelle-devoid cytosolic glycogen-rich areas; and glycogen particles are dispersed in the cytosol in control cells. PPP1R6 tagged at the C-terminus with EGFP shows a diffuse cytosolic pattern in glucose-replete and -depleted cells and a punctate pattern surrounding the nucleus in glucose-depleted cells, which colocalizes with RFP tagged with the Golgi-targeting domain of β-1,4-galactosyltransferase, in accordance with a computational prediction of a Golgi location for PPP1R6.

Conclusions: PPP1R6 exerts a powerful glycogenic effect in cultured muscle cells, more than GM and less than PTG. PPP1R6 protein translocates from a Golgi to a cytosolic location in response to glucose. The molecular size and subcellular location of myotube glycogen particles are determined by the PPP1R6, PTG and GM scaffolding.

Relevance:

30.00%

Publisher:

Abstract:

One of the first useful products from the human genome will be a set of predicted genes. Besides its intrinsic scientific interest, the accuracy and completeness of this data set is of considerable importance for human health and medicine. Though progress has been made on computational gene identification in terms of both methods and accuracy evaluation measures, most of the sequence sets on which the programs are tested are short genomic sequences, and there is concern that these accuracy measures may not extrapolate well to larger, more challenging data sets. Given the absence of experimentally verified large genomic data sets, we constructed a semiartificial test set comprising a number of short single-gene genomic sequences with randomly generated intergenic regions. This test set, which should still present an easier problem than real human genomic sequence, mimics the approximately 200 kb-long BACs being sequenced. In our experiments with these longer genomic sequences, the accuracy of GENSCAN, one of the most accurate ab initio gene prediction programs, dropped significantly, although its sensitivity remained high. Conversely, the accuracy of similarity-based programs, such as GENEWISE, PROCRUSTES, and BLASTX, was not affected significantly by the presence of random intergenic sequence, but depended on the strength of the similarity to the protein homolog. As expected, the accuracy dropped if the models were built using more distant homologs, and we were able to quantitatively estimate this decline. However, the specificities of these techniques are still rather good even when the similarity is weak, which is a desirable characteristic for driving expensive follow-up experiments.
Our experiments suggest that though gene prediction will improve with every new protein that is discovered and through improvements in the current set of tools, we still have a long way to go before we can decipher the precise exonic structure of every gene in the human genome using purely computational methodology.
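The construction described above, short single-gene sequences joined by randomly generated intergenic DNA with the true gene coordinates tracked, can be sketched as follows (the helper name, spacer length and deterministic seed are illustrative, not from the paper):

```python
import random

def make_semiartificial(genes, intergenic_len=5000, seed=0):
    """Concatenate single-gene sequences with random intergenic spacers,
    tracking the true coding coordinates in the merged sequence."""
    rng = random.Random(seed)
    seq_parts, annotations, pos = [], [], 0
    for gene in genes:
        spacer = "".join(rng.choice("ACGT") for _ in range(intergenic_len))
        seq_parts.append(spacer)
        pos += intergenic_len
        annotations.append((pos, pos + len(gene)))  # true gene span
        seq_parts.append(gene)
        pos += len(gene)
    return "".join(seq_parts), annotations

# Toy usage: two 9-bp "genes" separated by 20-bp random spacers.
seq, ann = make_semiartificial(["ATGAAATAG", "ATGCCCTGA"], intergenic_len=20)
```

A gene predictor can then be scored against `ann`, measuring how much random intergenic sequence degrades its specificity.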

Relevance:

30.00%

Publisher:

Abstract:

Background: A number of studies have used protein interaction data alone for protein function prediction. Here, we introduce a computational approach for the annotation of enzymes, based on the observation that similar protein sequences are more likely to perform the same function if they share similar interacting partners.

Results: The method has been tested against the PSI-BLAST program using a set of 3,890 protein sequences for which interaction data were available. For protein sequences that align with at least 40% sequence identity to a known enzyme, the specificity of our method in predicting the first three EC digits increased from 80% to 90% at 80% coverage when compared to PSI-BLAST.

Conclusion: Our method can also be applied to proteins for which homologous sequences with known interacting partners can be detected. Thus, our method could increase by 10% the specificity of genome-wide enzyme predictions based on sequence matching by PSI-BLAST alone.
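The core idea, that a sequence hit is more trustworthy when query and hit share interacting partners, can be sketched as a re-ranking step over sequence-search hits. The 40% identity threshold comes from the abstract; the Jaccard scoring and the function names are assumptions, not the paper's exact scheme:

```python
def shared_partner_score(query_partners, candidate_partners):
    """Jaccard overlap of interaction partners: a simple proxy for the
    observation that similar sequences sharing interactors share function."""
    a, b = set(query_partners), set(candidate_partners)
    return len(a & b) / len(a | b) if a | b else 0.0

def annotate(query_partners, hits):
    """hits: list of (ec_number, pct_identity, partners) from a sequence
    search. Keep hits above 40% identity and pick the one whose
    interacting partners overlap most with the query's (assumed scheme)."""
    scored = [(shared_partner_score(query_partners, p), ec, ident)
              for ec, ident, p in hits if ident >= 40.0]
    return max(scored)[1] if scored else None
```

Re-ranking by partner overlap, rather than identity alone, is what lifts specificity when two hits are similar in sequence but only one shares the query's interaction context.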

Relevance:

30.00%

Publisher:

Abstract:

Objective: The importance of hemodynamics in the etiopathogenesis of intracranial aneurysms (IAs) is widely accepted. Computational fluid dynamics (CFD) is being used increasingly for hemodynamic predictions. However, along with the continuing development and validation of these tools, it is imperative to collect the opinions of clinicians.

Methods: A workshop on CFD was conducted during the European Society of Minimally Invasive Neurological Therapy (ESMINT) Teaching Course in Lisbon, Portugal. Thirty-six delegates, mostly clinicians, performed supervised CFD analysis for an IA using the @neuFuse software developed within the European project @neurIST. Feedback on the workshop was collected and analyzed. Performance was assessed on a scale of 1 to 4 and compared with experts' performance.

Results: Current dilemmas in the management of unruptured IAs remained the most important motivating factor to attend the workshop, and the majority of participants showed interest in participating in a multicentric trial. The participants achieved an average score of 2.52 (range 0–4), which was 63% (range 0–100%) of an expert user's.

Conclusions: Although participants showed a manifest interest in CFD, there was a clear lack of awareness concerning the role of hemodynamics in the etiopathogenesis of IAs and the use of CFD in this context. More efforts are therefore required to enhance clinicians' understanding of the subject.

Relevance:

30.00%

Publisher:

Abstract:

High-throughput prioritization of cancer-causing mutations (drivers) is a key challenge of cancer genome projects, due to the number of somatic variants detected in tumors. One important step in this task is to assess the functional impact of tumor somatic mutations. A number of computational methods have been employed for that purpose, although most were originally developed to distinguish disease-related nonsynonymous single nucleotide variants (nsSNVs) from polymorphisms. Our new method, transformed Functional Impact score for Cancer (transFIC), improves the assessment of the functional impact of tumor nsSNVs by taking into account the baseline tolerance of genes to functional variants.
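A minimal sketch of the underlying idea, rescaling a variant's raw impact score against the background distribution of scores its gene normally tolerates, is shown below. The plain z-score transformation is an assumption for illustration; the actual transFIC transformation is more elaborate:

```python
import statistics

def transform_score(raw_score, baseline_scores):
    """Rescale a raw functional-impact score (e.g. from an nsSNV
    predictor) against the gene's baseline tolerance, here as a plain
    z-score: a simplification of the transFIC-style transformation."""
    mu = statistics.mean(baseline_scores)
    sigma = statistics.pstdev(baseline_scores)
    return (raw_score - mu) / sigma if sigma else 0.0
```

The same raw score thus ranks higher in a gene that tolerates little functional variation than in a highly tolerant gene, which is the point of the baseline correction.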

Relevance:

30.00%

Publisher:

Abstract:

Membrane proteins account for about 20% to 30% of all proteins encoded in a typical genome. They play central roles in multiple cellular processes, mediating the interaction of the cell with its surroundings. Over 60% of all drug targets contain a membrane domain. The experimental difficulty of obtaining a crystal structure severely limits our understanding of membrane protein function. Computational evolutionary studies of proteins are crucial for the prediction of 3D structures. In this project, we construct a tool able to quantify the positive selective pressure on each residue of a membrane protein through maximum-likelihood phylogeny reconstruction. The conservation plot, combined with a structural homology model, is also a potent tool to predict the residues that have essential roles in the structure and function of a membrane protein, and can be very useful in the design of validation experiments.
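A conservation plot of the kind mentioned can be sketched from a multiple sequence alignment; here per-column conservation is scored as 1 minus normalized Shannon entropy. This is a deliberate simplification: the project's tool detects positive selection via maximum-likelihood phylogeny reconstruction, which this column-entropy sketch does not attempt:

```python
import math
from collections import Counter

def column_conservation(alignment):
    """Per-position conservation of a multiple sequence alignment
    (equal-length strings), scored as 1 - normalized Shannon entropy,
    so 1.0 means fully conserved and 0.0 means maximally variable."""
    scores = []
    for i in range(len(alignment[0])):
        counts = Counter(seq[i] for seq in alignment)
        n = sum(counts.values())
        entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
        max_entropy = math.log2(len(alignment))
        scores.append(1.0 - entropy / max_entropy if max_entropy else 1.0)
    return scores
```

Plotted along the sequence and mapped onto a homology model, such scores highlight candidate residues for validation experiments.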

Relevance:

20.00%

Publisher:

Abstract:

Despite the huge increase in processor and interprocessor network performance, many computational problems remain unsolved due to the lack of critical resources such as sustained floating-point performance, memory bandwidth, etc. Examples of these problems are found in climate research, biology, astrophysics, high-energy physics (Monte Carlo simulations) and artificial intelligence, among other areas. For some of these problems, the computing resources of a single supercomputing facility can be one or two orders of magnitude below the resources needed to solve them. Supercomputer centers have to face an increasing demand for processing performance, with the direct consequence of an increasing number of processors and systems, resulting in a more difficult administration of HPC resources and the need for more physical space, higher electrical power consumption and improved air conditioning, among other problems. Since some of these problems cannot be easily solved, grid computing, understood as a technology enabling the addition and consolidation of computing power, can help in solving large-scale supercomputing problems. In this document, we describe how two supercomputing facilities in Spain joined their resources to solve a problem of this kind. The objectives of this experience were, among others, to demonstrate that such cooperation can enable the solution of larger problems and to measure the efficiency that could be achieved. We show some preliminary results of this experience and discuss to what extent these objectives were achieved.

Relevance:

20.00%

Publisher:

Abstract:

Here we describe the results of some computational explorations in Thompson's group F. We describe experiments to estimate the cogrowth of F with respect to its standard finite generating set, designed to address the subtle and difficult question of whether or not Thompson's group is amenable. We also describe experiments to estimate the exponential growth rate of F and the rate of escape of symmetric random walks with respect to the standard generating set.

Relevance:

20.00%

Publisher:

Abstract:

Inductive learning aims at finding general rules that hold true in a database. Targeted learning seeks rules for predicting the value of a variable based on the values of others, as in the case of linear or non-parametric regression analysis. Non-targeted learning finds regularities without a specific prediction goal. We model the product of non-targeted learning as rules stating that a certain phenomenon never happens, or that certain conditions necessitate another. For all types of rules, there is a trade-off between a rule's accuracy and its simplicity, so rule selection can be viewed as a choice problem among pairs of degrees of accuracy and complexity. However, one cannot in general tell what the feasible set in the accuracy-complexity space is. Formally, we show that finding out whether a point belongs to this set is computationally hard. In particular, in the context of linear regression, finding a small set of variables that obtains a certain value of R2 is computationally hard. Computational complexity may explain why a person is not always aware of rules that, if asked, she would find valid. This, in turn, may explain why one can change other people's minds (opinions, beliefs) without providing new information.
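The hardness claim means that, absent exploitable structure, checking whether some size-k subset of variables reaches a target accuracy falls back on exhaustive search over all C(n, k) subsets. A sketch of that brute force is below; the scoring function is a stand-in, not real regression, and all names are illustrative:

```python
from itertools import combinations

def best_subset(variables, score, k, target):
    """Exhaustively search size-k variable subsets for one reaching the
    target accuracy (e.g. an R^2 value). Returns the first qualifying
    subset and how many subsets were examined; in the worst case all
    C(n, k) must be tried, which is the combinatorial blow-up behind
    the paper's hardness result."""
    tried = 0
    for subset in combinations(variables, k):
        tried += 1
        if score(subset) >= target:
            return subset, tried
    return None, tried
```

For n variables the counter grows as C(n, k), which is why no generally efficient procedure can map out the accuracy-complexity frontier.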

Relevance:

20.00%

Publisher:

Abstract:

Forest fires are a serious threat to humans and nature from an ecological, social and economic point of view. Predicting their behaviour by simulation still delivers unreliable results and remains a challenging task. The latest approaches try to calibrate input variables, often tainted with imprecision, using optimisation techniques such as Genetic Algorithms (GAs). To converge faster towards fitter solutions, the GA is guided with knowledge obtained from historical or synthetic fires. We developed a robust and efficient knowledge storage and retrieval method. Nearest-neighbour search is applied to find the fire configuration from the knowledge base most similar to the current configuration. To this end, a distance measure was devised and implemented in several ways. Experiments show the performance of the different implementations regarding occupied storage and retrieval time, with highly satisfactory results.
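The retrieval step, finding the stored fire configuration closest to the current one, can be sketched with a weighted distance and a linear scan. The numeric encoding of a configuration and the weights are hypothetical; the paper compares several such implementations:

```python
import math

def config_distance(a, b, weights):
    """Weighted Euclidean distance between two fire configurations, each
    encoded as a tuple of numeric input variables (e.g. wind speed,
    moisture); the per-variable weights here are hypothetical."""
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

def nearest(query, knowledge_base, weights):
    """Linear-scan nearest-neighbour retrieval from the knowledge base."""
    return min(knowledge_base, key=lambda c: config_distance(query, c, weights))
```

The retrieved configuration's known-good calibration then seeds the GA, steering it towards fitter solutions faster than a cold start.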

Relevance:

20.00%

Publisher:

Abstract:

"See the abstract at the beginning of the document in the attached file."

Relevance:

20.00%

Publisher:

Abstract:

Minimal models for the explanation of decision-making in computational neuroscience are based on the analysis of the evolution of the average firing rates of two interacting neuron populations. While these models typically lead to multi-stable scenarios for the underlying dynamical systems, noise is an important feature of the model, taking into account finite-size effects and the robustness of decisions. These stochastic dynamical systems can be analyzed by carefully studying their associated Fokker-Planck partial differential equation. In particular, we discuss the existence, positivity and uniqueness of the solution of the stationary equation, as well as of the time-evolving problem. Moreover, we prove convergence of the solution to the stationary state, which represents the probability distribution of finding the neuron families in each of the decision states characterized by their average firing rates. Finally, we propose a numerical scheme for simulating the Fokker-Planck equation, whose results are in agreement with those obtained recently by a moment method applied to the stochastic differential system. Our approach leads to a more detailed analytical and numerical study of this decision-making model in computational neuroscience.

Relevance:

20.00%

Publisher:

Abstract:

Pensions, together with savings and investments during active life, are key elements of retirement planning. Personal choices about the standard of living, bequests and the replacement ratio of the pension with respect to the last salary must be considered. This research contributes to financial planning by helping to quantify the economic needs of long-term care. We estimate life expectancy from retirement age onwards. The economic cost of care per unit of service is linked to the expected time of needed care and the intensity of required services. The expected individual cost of long-term care from an onset of dependence is estimated separately for men and women. Assumptions on the mortality of dependent people compared to the general population are introduced. Parameters defining eligibility for various forms of coverage by the universal public social care of the welfare system are addressed. The impact of the intensity of social services on individual predictions is assessed, and partial coverage by standard private insurance products is also explored. Data were collected by the Spanish Institute of Statistics in two surveys conducted on the general Spanish population in 1999 and 2008. Official mortality records and life-table trends were used to create realistic scenarios for longevity. We find empirical evidence that the public long-term care system in Spain effectively mitigates the risk of incurring huge lifetime costs. We also find that the most vulnerable categories are citizens with moderate disabilities who do not qualify for public social care support. In the Spanish case, the trends between 1999 and 2008 need to be further explored.
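The core calculation, the expected present value of care costs from retirement onwards, can be sketched as a discounted sum over yearly survival and dependence probabilities. The probability vectors, annual cost and discount rate below are placeholders, not the paper's Spanish-survey estimates:

```python
def expected_care_cost(survival, dependence, annual_cost, discount=0.03):
    """Expected present value of long-term care from retirement onwards:
    for each year t, P(alive at t) * P(dependent | alive at t) * yearly
    cost, discounted back to retirement. All inputs are hypothetical."""
    return sum(s * d * annual_cost / (1 + discount) ** t
               for t, (s, d) in enumerate(zip(survival, dependence)))
```

Mortality assumptions for dependent people enter through the `survival` vector, and the intensity of public coverage through the effective `annual_cost`, which is how the paper's scenarios would plug into such a formula.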

Relevance:

20.00%

Publisher:

Abstract:

This PhD project aims to study paraphrasing, initially understood as the different ways in which the same content is expressed linguistically. We will go into that concept in depth trying to define and delimit its scope more accurately. In that sense, we also aim to discover which kind of structures and phenomena it covers. Although there exist some paraphrasing typologies, the great majority of them only apply to English, and focus on lexical and syntactic transformations. Our intention is to go further into this subject and propose a paraphrasing typology for Spanish and Catalan combining lexical, syntactic, semantic and pragmatic knowledge. We apply a bottom-up methodology trying to collect evidence of this phenomenon from the data. For this purpose, we are initially using the Spanish Wikipedia as our corpus. The internal structure of this encyclopedia makes it a good resource for extracting paraphrasing examples for our investigation. This empirical approach will be complemented with the use of linguistic knowledge, and by comparing and contrasting our results to previously proposed paraphrasing typologies in order to enlarge the possible paraphrasing forms found in our corpus. The fact that the same content can be expressed in many different ways presents a major challenge for Natural Language Processing (NLP) applications. Thus, research on paraphrasing has recently been attracting increasing attention in the fields of NLP and Computational Linguistics. The results obtained in this investigation would be of great interest in many of these applications.

Relevance:

20.00%

Publisher:

Abstract:

We study the properties of the well known Replicator Dynamics when applied to a finitely repeated version of the Prisoners' Dilemma game. We characterize the behavior of such dynamics under strongly simplifying assumptions (i.e. only 3 strategies are available) and show that the basin of attraction of defection shrinks as the number of repetitions increases. After discussing the difficulties involved in trying to relax the 'strongly simplifying assumptions' above, we approach the same model by means of simulations based on genetic algorithms. The resulting simulations describe a behavior of the system very close to the one predicted by the replicator dynamics without imposing any of the assumptions of the mathematical model. Our main conclusion is that mathematical and computational models are good complements for research in social sciences. Indeed, while computational models are extremely useful to extend the scope of the analysis to complex scenarios hard to analyze mathematically, formal models can be useful to verify and to explain the outcomes of computational models.
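The replicator dynamics used in the first part of the analysis can be simulated with a simple Euler step of dx_i/dt = x_i (f_i - f̄), where f_i is strategy i's payoff against the current population mix. The 3-strategy repeated-PD payoff matrix from the paper is not reproduced here; the usage below employs a toy 2-strategy matrix for illustration:

```python
def replicator_step(x, payoff, dt=0.01):
    """One Euler step of the replicator dynamics
    dx_i/dt = x_i * (f_i - avg f), where f_i = sum_j payoff[i][j] * x_j.
    The increments sum to zero, so the population shares stay on the
    simplex (up to floating point)."""
    fitness = [sum(payoff[i][j] * x[j] for j in range(len(x)))
               for i in range(len(x))]
    avg = sum(xi * fi for xi, fi in zip(x, fitness))
    return [xi + dt * xi * (fi - avg) for xi, fi in zip(x, fitness)]
```

Iterating this step from many initial mixes is how one maps a strategy's basin of attraction, the quantity the paper tracks as the number of repetitions grows.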