883 results for crittografia, mixnet, EasyCrypt, game-based proofs, sequence of games, computation-aided proofs


Relevance:

100.00%

Publisher:

Abstract:

Interaction of glucose/mannose-binding lectins in solution with immobilized glycoproteins was followed in real time using surface plasmon resonance technology. The lectins, which share many biochemical and structural features, could be clearly differentiated in terms of their specificity for complex glycoconjugates. The more prominent interaction of the lectins with PHA-E compared with soybean agglutinin, both glycoproteins carrying high-mannose oligosaccharides, suggests that the overall structure of the glycoproteins themselves may influence affinity. These findings also support the hypothesis that minor amino acid replacements in the primary sequence of the lectins might be responsible for their divergence in fine specificity and biological activities. This is the first report using surface plasmon resonance technology to demonstrate differences among Diocleinae lectins with respect to their fine glycan specificity.

Relevance:

100.00%

Publisher:

Abstract:

Background: A number of studies have used protein interaction data alone for protein function prediction. Here, we introduce a computational approach for the annotation of enzymes, based on the observation that similar protein sequences are more likely to perform the same function if they share similar interacting partners. Results: The method was tested against the PSI-BLAST program using a set of 3,890 protein sequences for which interaction data were available. For protein sequences that align with at least 40% sequence identity to a known enzyme, the specificity of our method in predicting the first three EC digits increased from 80% to 90% at 80% coverage when compared to PSI-BLAST. Conclusion: Our method can also be applied to proteins for which homologous sequences with known interacting partners can be detected. Thus, our method could increase by 10% the specificity of genome-wide enzyme predictions based on sequence matching by PSI-BLAST alone.
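
As a rough illustration of the idea behind this approach (not the authors' actual algorithm), the sketch below transfers an EC annotation from a sequence hit only when the query and the hit also share a sizeable fraction of interacting partners; the thresholds, the Jaccard overlap, and the combined score are assumptions made purely for the example.

```python
# Illustrative sketch (not the paper's exact method): transfer an EC annotation
# from a homologue only when the query and the homologue also share a sizeable
# fraction of interacting partners. All thresholds and names are assumptions.

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of interaction partners."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def predict_ec(query_partners, hits, min_identity=40.0, min_overlap=0.2):
    """
    query_partners : set of protein IDs that interact with the query
    hits           : list of (ec_number, sequence_identity_percent, partner_set),
                     e.g. from a PSI-BLAST search plus an interaction database
    Returns the EC number of the best-supported hit, or None.
    """
    best = None
    for ec, identity, partners in hits:
        if identity < min_identity:
            continue                      # keep only reasonably close homologues
        overlap = jaccard(query_partners, partners)
        if overlap < min_overlap:
            continue                      # require a shared interaction context
        score = identity * overlap        # simple combined score (assumption)
        if best is None or score > best[0]:
            best = (score, ec)
    return best[1] if best else None

# Toy usage with invented identifiers
hits = [("1.1.1.-", 55.0, {"P1", "P2", "P3"}),
        ("2.7.1.-", 48.0, {"P9"})]
print(predict_ec({"P1", "P2", "P7"}, hits))   # -> '1.1.1.-'
```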

Relevance:

100.00%

Publisher:

Abstract:

This paper describes a Computer-Supported Collaborative Learning (CSCL) case study in engineering education carried out within the context of a network management course. The case study shows that the use of two computing tools developed by the authors and based on Free and Open-Source Software (FOSS) provides significant educational benefits over traditional engineering pedagogical approaches, in terms of the acquisition of both concepts and engineering competencies. First, the Collage authoring tool guides and supports the course teacher in the process of authoring computer-interpretable representations (using the IMS Learning Design standard notation) of effective collaborative pedagogical designs. Then, the Gridcole system supports the enactment of that design by guiding the students through the prescribed sequence of learning activities. The paper introduces the goals and context of the case study, elaborates on how Collage and Gridcole were employed, describes the evaluation methodology applied, and discusses the most significant findings derived from the case study.

Relevance:

100.00%

Publisher:

Abstract:

Genetic variation at the melanocortin-1 receptor (MC1R) gene is correlated with melanin color variation in many birds. Feral pigeons (Columba livia) show two major melanin-based colorations: a red coloration due to pheomelanic pigment and a black coloration due to eumelanic pigment. Furthermore, within each color type, feral pigeons display continuous variation in the amount of melanin pigment present in the feathers, with individuals varying from pure white to a full dark melanic color. Coloration is highly heritable, and it has been suggested that it is under natural or sexual selection, or both. Our objective was to investigate whether MC1R allelic variants are associated with plumage color in feral pigeons. We sequenced 888 bp of the coding sequence of MC1R among pigeons varying both in the type, eumelanin or pheomelanin, and the amount of melanin in their feathers. We detected 10 non-synonymous substitutions and 2 synonymous substitutions, but none of them was associated with a plumage type. It remains possible that non-synonymous substitutions that influence coloration are present in the short MC1R fragment that we did not sequence, but this seems unlikely because we analyzed the entire functionally important region of the gene. Our results show that color differences among feral pigeons are probably not attributable to amino acid variation at the MC1R locus. Therefore, variation in regulatory regions of MC1R or variation in other genes may be responsible for the color polymorphism of feral pigeons.
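
For readers unfamiliar with the distinction drawn above, the snippet below sketches how substitutions in an aligned coding region can be classified as synonymous or non-synonymous by translating each codon pair. It is a generic illustration using Biopython, not the study's pipeline, and the example sequences are invented.

```python
# Generic illustration: classify substitutions between two aligned, gap-free
# coding sequences as synonymous (same amino acid) or non-synonymous.
# The sequences below are made up for the example.
from Bio.Seq import Seq

def classify_substitutions(cds_a: str, cds_b: str):
    """Compare two aligned, gap-free coding sequences codon by codon."""
    assert len(cds_a) == len(cds_b) and len(cds_a) % 3 == 0
    syn, nonsyn = [], []
    for i in range(0, len(cds_a), 3):
        ca, cb = cds_a[i:i + 3], cds_b[i:i + 3]
        if ca == cb:
            continue
        aa_a, aa_b = str(Seq(ca).translate()), str(Seq(cb).translate())
        (syn if aa_a == aa_b else nonsyn).append((i // 3 + 1, ca, cb, aa_a, aa_b))
    return syn, nonsyn

# Toy usage with two hypothetical 9-bp fragments
syn, nonsyn = classify_substitutions("ATGGCTCGT", "ATGGCCAGT")
print("synonymous:", syn)        # codon 2: GCT -> GCC, both Ala
print("non-synonymous:", nonsyn) # codon 3: CGT (Arg) -> AGT (Ser)
```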

Relevance:

100.00%

Publisher:

Abstract:

Mass spectrometry (MS) is currently the most sensitive and selective analytical technique for routine peptide and protein structure analysis. Top-down proteomics is based on tandem mass spectrometry (MS/MS) of intact proteins, where multiply charged precursor ions are fragmented in the gas phase, typically by electron transfer or electron capture dissociation, to yield sequence-specific fragment ions. This approach is primarily used for the study of protein isoforms, including localization of post-translational modifications and identification of splice variants. Bottom-up proteomics is utilized for routine high-throughput protein identification and quantitation from complex biological samples: the proteins are first enzymatically digested into small (usually less than ca. 3 kDa) peptides, which are then identified by MS or MS/MS, usually employing collisional activation techniques. To overcome the limitations of these approaches while combining their benefits, middle-down proteomics has recently emerged. Here, the proteins are digested into long (3-15 kDa) peptides via restricted proteolysis, followed by MS/MS analysis of the obtained digest. With advancements in high-resolution MS and allied techniques, routine implementation of the middle-down approach has become possible. Herein, we present the liquid chromatography (LC)-MS/MS-based experimental design of our middle-down proteomic workflow coupled with post-LC supercharging.
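
To make the peptide-size distinction concrete, the sketch below performs a toy in-silico restricted digest (cleaving after glutamic acid, as a Glu-C-like protease would) and keeps only peptides in the 3-15 kDa middle-down window. The protein sequence, the cleavage rule and the thresholds are illustrative assumptions, not the workflow described in the paper.

```python
# Rough illustration of the middle-down idea: an in-silico restricted digest
# (cleavage after Glu, a stand-in for a Glu-C-like protease) followed by
# selection of 3-15 kDa peptides. Residue masses are standard monoisotopic
# values; the input sequence is invented.

RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276, "V": 99.06841,
    "T": 101.04768, "C": 103.00919, "L": 113.08406, "I": 113.08406, "N": 114.04293,
    "D": 115.02694, "Q": 128.05858, "K": 128.09496, "E": 129.04259, "M": 131.04049,
    "H": 137.05891, "F": 147.06841, "R": 156.10111, "Y": 163.06333, "W": 186.07931,
}
WATER = 18.01056

def digest_after(sequence: str, residue: str = "E"):
    """Cleave C-terminally to the given residue (restricted-proteolysis sketch)."""
    peptides, start = [], 0
    for i, aa in enumerate(sequence):
        if aa == residue:
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):
        peptides.append(sequence[start:])
    return peptides

def mass(peptide: str) -> float:
    """Monoisotopic peptide mass in Da."""
    return sum(RESIDUE_MASS[aa] for aa in peptide) + WATER

def middle_down_peptides(sequence: str, low=3000.0, high=15000.0):
    return [p for p in digest_after(sequence) if low <= mass(p) <= high]

# Toy usage with a made-up sequence
print(middle_down_peptides("MKT" * 20 + "E" + "ASLV" * 10 + "E" + "GPR" * 5))
```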

Relevance:

100.00%

Publisher:

Abstract:

We sought to assess the feasibility and reproducibility of performing tissue-based immune characterization of the tumor microenvironment using CT-compatible needle biopsy material. Three independent biopsies were obtained intraoperatively from one metastatic epithelial ovarian cancer lesion of each of 7 consecutive patients undergoing surgical cytoreduction, using a 16-gauge core biopsy needle. Core specimens were snap-frozen and subjected to immunohistochemistry (IHC) against human CD3, CD4, CD8, and FoxP3. A portion of the cores was used to isolate RNA for (1) real-time quantitative (q)PCR for CD3, CD4, CD8, FoxP3, IL-10 and TGF-beta, (2) multiplexed PCR-based T cell receptor (TCR) CDR3 Vβ region spectratyping, and (3) gene expression profiling. Pearson's correlations were examined for immunohistochemistry and PCR gene expression, as well as for gene expression array data obtained from different tumor biopsies. Needle biopsy yielded sufficient tissue for all assays in all patients. IHC was highly reproducible and informative. Significant correlations were seen between the frequencies of CD3+, CD8+ and FoxP3+ T cells by IHC and CD3ε, CD8A, and FoxP3 gene expression, respectively, by qPCR (r = 0.61, 0.86, and 0.89; all p < 0.05). CDR3 spectratyping was feasible and highly reproducible in each tumor, and indicated a restricted repertoire for specific TCR Vβ chains in tumor-infiltrating T cells. Microarray gene expression revealed strong correlation between different biopsies collected from the same tumor. Our results demonstrate a feasible and reproducible method of immune monitoring using CT-compatible needle biopsies from tumor tissue, thereby paving the way for sophisticated translational studies during tumor biological therapy.
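
The correlations reported above are ordinary Pearson correlations between per-core IHC cell frequencies and qPCR expression of the matching gene. The numbers in the minimal sketch below are invented and serve only to show the calculation, not to reproduce the study's data.

```python
# Minimal sketch of the correlation analysis described above; all values are
# invented placeholders, not the study's measurements.
from scipy.stats import pearsonr

ihc_cd8_frequency = [2.1, 5.4, 8.0, 3.3, 6.7, 9.1, 4.0]     # % CD8+ cells per core (assumed)
qpcr_cd8a_expression = [0.8, 2.0, 3.1, 1.1, 2.6, 3.5, 1.6]   # relative CD8A expression (assumed)

r, p = pearsonr(ihc_cd8_frequency, qpcr_cd8a_expression)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```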

Relevance:

100.00%

Publisher:

Abstract:

This article introduces a new interface for T-Coffee, a consistency-based multiple sequence alignment program. This interface provides easy and intuitive access to the most popular functionality of the package, including the default T-Coffee mode for protein and nucleic acid sequences, the M-Coffee mode that allows combining the output of any other aligners, and the template-based modes of T-Coffee that deliver high-accuracy alignments using structural or homology-derived templates. The three available template modes are Expresso, for the alignment of proteins with a known 3D structure; R-Coffee, to align RNA sequences with conserved secondary structures; and PSI-Coffee, to accurately align distantly related sequences using homology extension. The new server benefits from recent improvements to the T-Coffee algorithm, can align up to 150 sequences as long as 10,000 residues, and is available from both http://www.tcoffee.org and its main mirror http://tcoffee.crg.cat.

Relevance:

100.00%

Publisher:

Abstract:

Within the last few decades, the videogame has become an important media, economic, and cultural phenomenon. As the phenomenon has proliferated, however, the aspects that constitute its identity have become more and more challenging to determine. The persistent surfacing of novel ludic forms continues to expand the conceptual range of ‘games’ and ‘videogames,’ which has already led to anxious generalizations within academic as well as popular discourses. Such generalizations make it increasingly difficult to comprehend how the instances of this phenomenon actually work, which in turn generates pragmatic problems: the lack of an applicable identification of the videogame hinders its study, play, and everyday conceptualization. To counteract these problems, this dissertation establishes a geneontological research methodology that enables the identification of the videogame in relation to its cultural surroundings. Videogames are theorized as ‘games,’ ‘puzzles,’ ‘stories,’ and ‘aesthetic artifacts’ (or ‘artworks’), which produces a geneontological sequence of the videogame as a singular species of culture, Artefactum ludus ludus, or ludom for short. According to this sequence, the videogame’s position as a ‘game’ in the historicized evolution of culture is mainly metaphorical, while at the same time its artifactuality, dynamic system structure, time-critical strategic input requirements and aporetically rhematic aesthetics allow it to be discovered as a conceptually stable but empirically transient uniexistential phenomenon that currently thrives but may soon die out.

Relevance:

100.00%

Publisher:

Abstract:

The healthcare sector is currently on the verge of a reform, and medical game research thus provides an interesting area of study. The aim of this study is to explore the critical elements underpinning the emergence of the medical game ecosystem, with three sub-objectives: (1) to identify the key actors involved in the medical game ecosystem and their needs, (2) to scrutinise what types of resources are required in medical game development and what types of relationships are needed to secure those resources, and (3) to identify the existing institutions (‘the rules of the game’) affecting the emergence of the medical game ecosystem. The theoretical background consists of the service ecosystems literature. The empirical study is based on semi-structured theme interviews with 25 experts in three relevant fields: games and technology, health, and funding. The data were analysed through a theoretical framework designed on the basis of the service ecosystems literature. The study proposes that the key actors fall into five groups: medical game companies, customers, funders, regulatory parties and complementors. Their needs are linked to improving patient motivation and enhancing healthcare processes, resulting in lower costs. Several types of resources, especially skills and knowledge, are required to create a medical game. To gain access to those resources, medical game companies need to build complex networks of relationships, and proficiency in managing those value networks is crucial. In addition, a company should take into account the underlying institutions in the healthcare sector affecting the medical game ecosystem. Three crucial institutions were identified: validation, the lack of innovation-supporting structures in healthcare, and rising consumerisation. Based on the findings, medical games cannot be made in isolation. A developmental trajectory model of the emerging medical game ecosystem was created based on the empirical data: the relevance of particular relationships and resources depends on the trajectory stage the medical game company currently occupies. Furthermore, creating an official and documented database of clinically validated medical games was proposed in order to establish the medical game market and ensure an adequate status for effective medical games. Finally, the ecosystems approach provides interesting future opportunities for research on medical game ecosystems.

Relevance:

100.00%

Publisher:

Abstract:

Recombinant adenoviruses are currently under intense investigation as potential gene delivery and gene expression vectors with applications in human and veterinary medicine. As part of our efforts to develop a bovine adenovirus type 2 (BAV2) based vector system, the nucleotide sequence of BAV2 was determined. Sixty-six open reading frames (ORFs) were found with the potential to encode polypeptides at least 50 amino acid (aa) residues long. Thirty-one of the BAV2 polypeptide sequences were found to share homology with already identified adenovirus proteins. The arrangement of the genes revealed that the BAV2 genomic organization closely resembles that of well-characterized human adenoviruses. In the course of this study, continuous propagation of BAV2 over many generations in cell culture resulted in the isolation of a BAV2 spontaneous mutant in which the E3 region was deleted. Restriction enzyme, sequencing and PCR analyses produced concordant results that precisely located the deletion and revealed that its size was exactly 1299 bp. The E3-deleted virus was plaque-purified and further propagated in cell culture. The replication of this virus lacking a portion of the E3 region appeared unaffected, at least in cell culture. Attempts to rescue a recombinant BAV2 virus carrying the bacterial kanamycin resistance gene in the E3 region yielded a candidate, as verified by extensive Southern blotting and PCR analyses. Attempts to purify the recombinant virus were not successful, suggesting that this recombinant BAV2 was helper-dependent. Ten clones containing full-length BAV2 genomes in a pWE15 cosmid vector were constructed. The infectivity of these constructs was tested using different transfection methods. The BAV2 genomic clones appeared to be infectious only after an extended incubation period. This may be due to limitations of the various transfection methods tested, or to biological differences between virus-derived and E. coli-derived BAV2 DNA.

Relevance:

100.00%

Publisher:

Abstract:

A feature-based fitness function is applied in a genetic programming system to synthesize stochastic gene regulatory network models whose behaviour is defined by a time course of protein expression levels. Typically, when targeting time series data, the fitness function is based on a sum of errors involving the values of the fluctuating signal. While this approach is successful in many instances, its performance can deteriorate in the presence of noise. This thesis explores a fitness measure determined from a set of statistical features characterizing the time series' sequence of values, rather than the actual values themselves. Through a series of experiments involving symbolic regression with added noise and gene regulatory network models based on the stochastic π-calculus, it is shown to successfully target oscillating and non-oscillating signals. This practical and versatile fitness function offers an alternative approach, worthy of consideration for use in algorithms that evaluate noisy or stochastic behaviour.
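
The sketch below illustrates the general idea of a feature-based fitness: score a candidate time series against a target by comparing summary statistics rather than pointwise values, which is less sensitive to noise than a sum-of-errors. The particular feature set and weighting are assumptions made for the example, not the thesis's actual measure.

```python
# Sketch of a feature-based fitness: compare summary statistics of candidate and
# target series instead of pointwise values. Feature choice and weighting here
# are illustrative assumptions.
import numpy as np

def features(series) -> np.ndarray:
    s = np.asarray(series, dtype=float)
    mean, std = s.mean(), s.std()
    # lag-1 autocorrelation (0 for a flat series)
    ac1 = 0.0 if std == 0 else np.corrcoef(s[:-1], s[1:])[0, 1]
    # index of the dominant non-zero frequency, a crude oscillation indicator
    dom = int(np.argmax(np.abs(np.fft.rfft(s - mean))[1:]) + 1)
    return np.array([mean, std, ac1, dom])

def feature_fitness(candidate, target) -> float:
    """Lower is better: distance between feature vectors."""
    return float(np.linalg.norm(features(candidate) - features(target)))

def sse_fitness(candidate, target) -> float:
    """Conventional sum-of-squared-errors fitness, for comparison."""
    return float(np.sum((np.asarray(candidate) - np.asarray(target)) ** 2))

# Toy usage: a noisy oscillation scored against a clean one
t = np.linspace(0, 10, 200)
target = np.sin(2 * np.pi * t)
candidate = np.sin(2 * np.pi * t) + np.random.normal(0, 0.3, t.size)
print("feature-based:", feature_fitness(candidate, target))
print("sum-of-errors:", sse_fitness(candidate, target))
```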

Relevance:

100.00%

Publisher:

Abstract:

Computational Biology is the research area that contributes to the analysis of biological data through the development of algorithms which address significant research problems. The data from molecular biology include DNA, RNA, protein and gene expression data. Gene expression data provide the expression level of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins. The number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix. Rows in the matrix represent genes and columns represent experimental conditions. Experimental conditions can be different tissue types or time points. Entries in the gene expression matrix are real values. Through the analysis of gene expression data it is possible to determine behavioural patterns of genes, such as the similarity of their behaviour, the nature of their interactions, and their respective contributions to the same pathways. Similar expression patterns are exhibited by genes participating in the same biological process. These patterns have immense relevance and application in bioinformatics and clinical research; they are used in the medical domain to aid more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. To identify such patterns from gene expression data, data mining techniques are essential. Clustering is an important data mining technique for the analysis of gene expression data. To overcome the problems associated with clustering, biclustering is introduced. Biclustering refers to the simultaneous clustering of both rows and columns of a data matrix. Clustering is a global model, whereas biclustering is a local model. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise. It is therefore necessary to move beyond the clustering paradigm towards developing approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix; its rows and columns need not be contiguous as in the gene expression data matrix, and biclusters are not disjoint. Computation of biclusters is costly because one has to consider all combinations of rows and columns in order to find all the biclusters. The search space for the biclustering problem is 2^(m+n), where m and n are the number of genes and conditions respectively; usually m+n is more than 3,000, and the biclustering problem is NP-hard. Biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering. Ten algorithms are developed for the identification of coherent biclusters from gene expression data. All these algorithms make use of a measure called the mean squared residue to search for biclusters. The objective is to identify biclusters of maximum size with a mean squared residue lower than a given threshold.
All these algorithms begin the search from tightly coregulated submatrices called seeds, which are generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint-based, greedy and metaheuristic. Constraint-based algorithms use one or more constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms are applied to the Yeast and Lymphoma datasets. Biologically relevant and statistically significant biclusters are identified by all these algorithms and validated against the Gene Ontology database. All these algorithms are compared with other biclustering algorithms, and the algorithms developed in this work overcome some of the problems associated with the existing ones. With the help of some of the algorithms developed in this work, biclusters with very high row variance, higher than the row variance achieved by any other algorithm using mean squared residue, are identified from both the Yeast and Lymphoma data sets. Such biclusters, which capture significant changes in expression level, are highly relevant biologically.
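
The mean squared residue used as the search criterion above is the measure introduced by Cheng and Church for biclustering; a minimal numpy sketch of its computation follows, with an invented matrix and threshold purely for illustration.

```python
# Minimal numpy sketch of the mean squared residue (MSR) of a bicluster,
# i.e. the Cheng-Church coherence measure; example data and threshold are
# illustrative only.
import numpy as np

def mean_squared_residue(bicluster) -> float:
    """
    MSR of a submatrix A with rows I and columns J:
        H(I, J) = mean over (i, j) of (a_ij - a_iJ - a_Ij + a_IJ)^2
    where a_iJ is the row mean, a_Ij the column mean and a_IJ the overall mean.
    A perfectly coherent (constant or additive) bicluster has MSR 0.
    """
    a = np.asarray(bicluster, dtype=float)
    row_means = a.mean(axis=1, keepdims=True)
    col_means = a.mean(axis=0, keepdims=True)
    residue = a - row_means - col_means + a.mean()
    return float((residue ** 2).mean())

# Toy usage: an additive pattern (MSR = 0) vs. a noisy version of it
coherent = np.array([[1, 2, 3],
                     [4, 5, 6],
                     [7, 8, 9]])
noisy = coherent + np.random.normal(0, 0.5, coherent.shape)
delta = 0.3  # hypothetical MSR threshold
print(mean_squared_residue(coherent))          # 0.0
print(mean_squared_residue(noisy) <= delta)    # depends on the sampled noise
```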

Relevance:

100.00%

Publisher:

Abstract:

The central thesis of this report is that human language is NP-complete. That is, the process of comprehending and producing utterances is bounded above by the class NP, and below by NP-hardness. This constructive complexity thesis has two empirical consequences. The first is to predict that a linguistic theory outside NP is unnaturally powerful. The second is to predict that a linguistic theory easier than NP-hard is descriptively inadequate. To prove the lower bound, I show that the following three subproblems of language comprehension are all NP-hard: decide whether a given sound is a possible sound of a given language; disambiguate a sequence of words; and compute the antecedents of pronouns. The proofs are based directly on the empirical facts of the language user's knowledge, under an appropriate idealization. Therefore, they are invariant across linguistic theories. (For this reason, no knowledge of linguistic theory is needed to understand the proofs, only knowledge of English.) To illustrate the usefulness of the upper bound, I show that two widely accepted analyses of the language user's knowledge (of syntactic ellipsis and phonological dependencies) lead to complexity outside of NP (PSPACE-hard and undecidable, respectively). Next, guided by the complexity proofs, I construct alternative linguistic analyses that are strictly superior on descriptive grounds, as well as being less complex computationally (in NP). The report also presents a new framework for linguistic theorizing that resolves important puzzles in generative linguistics and guides the mathematical investigation of human language.