Abstract:
Large animal models are an important resource for understanding human disease and for evaluating the applicability of new therapies to human patients. For many diseases, such as cone dystrophy, research effort is hampered by the lack of such models. Lentiviral transgenesis is a methodology broadly applicable to animals of many different species. When coupled with the expression of a dominant mutant protein, this technology offers an attractive approach to generating new large animal models in a heterogeneous background. We adopted this strategy to mimic the phenotypic diversity encountered in humans and generated a cohort of pigs modelling cone dystrophy by expressing a dominant mutant allele of the guanylate cyclase 2D (GUCY2D) gene. Sixty percent of the piglets were transgenic, with mutant GUCY2D mRNA detected in the retina of all animals tested. Functional impairment of vision was observed among the transgenic pigs at 3 months of age, with a follow-up at 1 year indicating a subsequently slower progression of the phenotype. Abnormal retinal morphology, notably among the cone photoreceptor cell population, was observed exclusively among the transgenic animals. Of particular note, these transgenic animals were characterized by a range in the severity of the phenotype, reflecting the human clinical situation. We demonstrate that a transgenic approach using lentiviral vectors offers a powerful tool for large animal model development. Not only is the efficiency of transgenesis higher than that of conventional transgenic methodology, but this technique also produces a heterogeneous cohort of transgenic animals that mimics the genetic variation encountered in human patients.
Abstract:
MOTIVATION: Analysis of millions of pyrosequences is currently playing a crucial role in the advance of environmental microbiology. Taxonomy-independent, i.e. unsupervised, clustering of these sequences is essential for the definition of Operational Taxonomic Units. For this application, reproducibility and robustness should be the most sought-after qualities, but they have thus far largely been overlooked. RESULTS: More than 1 million hyper-variable internal transcribed spacer 1 (ITS1) sequences of fungal origin have been analyzed. The ITS1 sequences were first properly extracted from 454 reads using generalized profiles. Then otupipe, cd-hit-454, ESPRIT-Tree and DBC454, a new algorithm presented here, were used to analyze the sequences. A numerical assay was developed to measure the reproducibility and robustness of these algorithms. DBC454 was the most robust, closely followed by ESPRIT-Tree. DBC454 features density-based hierarchical clustering, which complements the other methods by providing insights into the structure of the data. AVAILABILITY: An executable is freely available for non-commercial users at ftp://ftp.vital-it.ch/tools/dbc454. It is designed to run under MPI on a cluster of 64-bit Linux machines running Red Hat 4.x, or on a multi-core OSX system. CONTACT: dbc454@vital-it.ch or nicolas.guex@isb-sib.ch.
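The abstract does not specify how the numerical assay for reproducibility works. One common way to quantify clustering reproducibility, sketched below purely as an assumption, is to cluster two overlapping subsamples of the data and compare the labels assigned to the shared items with the adjusted Rand index. The k-mer representation and the agglomerative clustering call are illustrative placeholders, not the DBC454 algorithm.

```python
# Hedged sketch: reproducibility of a clustering measured with the adjusted
# Rand index (ARI) on two overlapping subsamples. NOT the paper's assay or
# algorithm; the k-mer vectors and AgglomerativeClustering are stand-ins.
import numpy as np
from itertools import product
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

def kmer_vectors(seqs, k=3):
    """Map each sequence to a vector of k-mer counts over a fixed DNA vocabulary."""
    vocab = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {m: i for i, m in enumerate(vocab)}
    X = np.zeros((len(seqs), len(vocab)))
    for r, s in enumerate(seqs):
        for i in range(len(s) - k + 1):
            j = index.get(s[i:i + k])
            if j is not None:
                X[r, j] += 1
    return X

def reproducibility(seqs, n_clusters=3, frac=0.8, seed=0):
    """Cluster two random subsamples and score label agreement on shared items."""
    rng = np.random.default_rng(seed)
    n = len(seqs)
    a = sorted(rng.choice(n, int(frac * n), replace=False).tolist())
    b = sorted(rng.choice(n, int(frac * n), replace=False).tolist())
    shared = sorted(set(a) & set(b))
    X = kmer_vectors(seqs)
    model = AgglomerativeClustering(n_clusters=n_clusters, linkage="average")
    la = dict(zip(a, model.fit_predict(X[a])))
    lb = dict(zip(b, model.fit_predict(X[b])))
    return adjusted_rand_score([la[i] for i in shared], [lb[i] for i in shared])
```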
Abstract:
For the last two decades, supertree reconstruction has been an active field of research and has seen the development of a large number of major algorithms. Because of the growing popularity of supertree methods, it has become necessary to evaluate the performance of these algorithms to determine which are the best options (especially with regard to the widely used supermatrix approach). In this study, seven of the most commonly used supertree methods are investigated using a large empirical data set (in terms of number of taxa and molecular markers) from the worldwide flowering plant family Sapindaceae. Supertree methods were evaluated using several criteria: similarity of the supertrees with the input trees, similarity between the supertrees and the total evidence tree, level of resolution of the supertree, and computational time required by the algorithm. Additional analyses were also conducted on a reduced data set to test whether performance was affected by the heuristic searches rather than by the algorithms themselves. Based on our results, two main groups of supertree methods were identified: the matrix representation with parsimony (MRP), MinFlip, and MinCut methods performed well according to our criteria, whereas the average consensus, split fit, and most similar supertree methods performed more poorly or at least did not behave in the same way as the total evidence tree. Results for the super distance matrix, the most recent approach tested here, were promising, with at least one derived method performing as well as MRP, MinFlip, and MinCut. The output of each method was only slightly improved when applied to the reduced data set, suggesting correct behavior of the heuristic searches and a relatively low sensitivity of the algorithms to data set size and missing data. Results also showed that MRP analyses could reach a high level of quality even when using a simple heuristic search strategy, with the exception of MRP with the Purvis coding scheme and reversible parsimony. The future of supertrees lies in the implementation of a standardized heuristic search for all methods and in increased computing power to handle large data sets. The latter would prove particularly useful for promising approaches such as the maximum quartet fit method, which still requires substantial computing power.
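For readers unfamiliar with MRP, the coding step it relies on can be illustrated briefly. In the standard Baum/Ragan matrix representation, every internal clade of every input tree becomes one binary character: taxa inside the clade score 1, taxa present in that tree but outside the clade score 0, and taxa absent from the tree score '?'. The sketch below is a minimal illustration of that coding only; the toy input format (taxon set plus a list of clades) is an assumption, and real pipelines parse Newick trees and then analyse the matrix with a parsimony program.

```python
# Minimal illustration of MRP (matrix representation with parsimony) coding.
# Each internal clade of each input tree becomes a binary character:
# 1 = taxon inside the clade, 0 = taxon in the tree but outside the clade,
# ? = taxon absent from that tree.
def mrp_matrix(input_trees):
    all_taxa = sorted(set().union(*(taxa for taxa, _ in input_trees)))
    rows = {t: [] for t in all_taxa}
    for taxa, clades in input_trees:
        for clade in clades:
            for t in all_taxa:
                if t not in taxa:
                    rows[t].append("?")
                elif t in clade:
                    rows[t].append("1")
                else:
                    rows[t].append("0")
    return {t: "".join(chars) for t, chars in rows.items()}

# Two toy input trees: ((A,B),C) and ((B,C),D), each contributing one clade.
trees = [
    ({"A", "B", "C"}, [{"A", "B"}]),
    ({"B", "C", "D"}, [{"B", "C"}]),
]
for taxon, row in mrp_matrix(trees).items():
    print(taxon, row)
# A 1?   B 11   C 01   D ?0  -> this matrix is then analysed with parsimony.
```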
Abstract:
Objectives: To develop European League Against Rheumatism (EULAR) recommendations for the management of large vessel vasculitis. Methods: An expert group (10 rheumatologists, 3 nephrologists, 2 immunologists, 2 internists representing 8 European countries and the USA, a clinical epidemiologist and a representative from a drug regulatory agency) identified 10 topics for a systematic literature search through a modified Delphi technique. In accordance with standardised EULAR operating procedures, recommendations were derived for the management of large vessel vasculitis. In the absence of evidence, recommendations were formulated on the basis of consensus opinion. Results: Seven recommendations were made relating to the assessment, investigation and treatment of patients with large vessel vasculitis. The strength of the recommendations was restricted by the low level of evidence and by EULAR standardised operating procedures. Conclusions: On the basis of evidence and expert consensus, management recommendations for large vessel vasculitis have been formulated and are commended for use in everyday clinical practice.
Abstract:
The presynaptic plasma membrane (PSPM) of cholinergic nerve terminals was purified from Torpedo electric organ using a large-scale procedure. Up to 500 g of frozen electric organ were fractionated in a single run, leading to the isolation of more than 100 mg of PSPM proteins. The purity of the fraction is similar to that of the synaptosomal plasma membrane obtained after subfractionation of Torpedo synaptosomes, as judged by its membrane-bound acetylcholinesterase activity, the number of Glycera convoluta neurotoxin binding sites, and the binding of two monoclonal antibodies directed against the PSPM. The specificity of these antibodies for the PSPM is demonstrated by immunofluorescence microscopy.
Abstract:
BACKGROUND: Genotypes obtained with commercial SNP arrays have been used extensively in many large case-control or population-based cohorts for SNP-based genome-wide association studies of a multitude of traits. Yet these genotypes capture only a small fraction of the variance of the studied traits. Genomic structural variants (GSV) such as Copy Number Variation (CNV) may account for part of the missing heritability, but their comprehensive detection requires either next-generation arrays or sequencing. Sophisticated algorithms that infer CNVs by combining the intensities from the SNP probes for the two alleles can already be used to extract a partial view of such GSV from existing data sets. RESULTS: Here we present several advances to facilitate the latter approach. First, we introduce a novel CNV detection method based on a Gaussian Mixture Model. Second, we propose a new algorithm, PCA merge, for combining copy-number profiles from many individuals into consensus regions. We applied both our new methods and existing ones to data from 5612 individuals from the CoLaus study who were genotyped on Affymetrix 500K arrays. We developed a number of procedures to evaluate the performance of the different methods. These include comparison with previously published CNVs as well as the use of a replication sample of 239 individuals genotyped with Illumina 550K arrays. We also established a new evaluation procedure that exploits the fact that related individuals are expected to share their CNVs more frequently than randomly selected individuals. The ability to detect both rare and common CNVs provides a valuable resource that will facilitate association studies exploring potential phenotypic associations with CNVs. CONCLUSION: Our new methodologies for CNV detection and their evaluation will help in extracting additional information from the large amount of SNP-genotyping data available for various cohorts, and in using it to explore structural variants and their impact on complex traits.
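The abstract names a Gaussian Mixture Model as the basis of the new CNV detection method but gives no details. The sketch below shows, under stated assumptions, what GMM-based copy-number calling can look like: fit a three-component mixture to a per-probe intensity summary (here simulated log R ratios) and map the components, ordered by mean, to loss / neutral / gain states. The component count, the use of log R ratio, and the state mapping are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch of GMM-based copy-number calling (not the paper's method).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Simulated log R ratios: mostly copy-neutral probes plus a deletion and a gain.
lrr = np.concatenate([
    rng.normal(0.0, 0.15, 900),   # copy-neutral (2 copies)
    rng.normal(-0.6, 0.15, 60),   # deletion (1 copy)
    rng.normal(0.4, 0.15, 40),    # duplication (3 copies)
]).reshape(-1, 1)

# Fit a three-component mixture and map components (ordered by mean) to states.
gmm = GaussianMixture(n_components=3, random_state=0).fit(lrr)
order = np.argsort(gmm.means_.ravel())           # low -> high mean
state = {order[0]: "loss", order[1]: "neutral", order[2]: "gain"}
calls = [state[c] for c in gmm.predict(lrr)]
print({s: calls.count(s) for s in ("loss", "neutral", "gain")})
```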
Abstract:
Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, their format is often the first obstacle. The lack of standardized ways of exploring different data layouts requires an effort to solve the problem from scratch each time. The possibility of accessing data in a rich, uniform manner, e.g. using Structured Query Language (SQL), would offer expressiveness and user-friendliness. Comma-separated values (CSV) is one of the most common data storage formats. Despite its simplicity, handling it becomes non-trivial as file size grows. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if the horizontal dimension reaches thousands of columns. Most databases are optimized for handling a large number of rows rather than columns; therefore, performance on datasets with non-typical layouts is often unacceptable. Other challenges include schema creation, updates and repeated data imports. To address the above-mentioned problems, I present a system for accessing very large CSV-based datasets by means of SQL. It is characterized by: a "no copy" approach - data stay mostly in the CSV files; "zero configuration" - no need to specify a database schema; implementation in C++ with boost [1], SQLite [2] and Qt [3], requiring no installation and having a very small size; query rewriting, dynamic creation of indices for appropriate columns, and static data retrieval directly from CSV files, which together ensure efficient plan execution; effortless support for millions of columns; per-value typing, which makes using mixed text/number data easy; and a very simple network protocol that provides an efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware, along with educational videos, on its website [4]. It does not need any prerequisites to run, as all of the libraries are included in the distribution package. I test it against existing database solutions using a battery of benchmarks and discuss the results.
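The "no copy" approach with dynamic index creation can be made concrete with a small sketch. The real system is a C++/SQLite engine with query rewriting; the Python fragment below only illustrates the principle under that caveat: the first time a column is queried, build an index mapping its values to byte offsets in the CSV file, then answer equality lookups by seeking directly into the file instead of importing it into a database. The file and column names in the usage comment are hypothetical.

```python
# Hedged sketch of the "no copy" idea: index a CSV column on demand
# (value -> byte offsets of rows) and answer lookups by seeking into the file.
import csv
from collections import defaultdict
from io import StringIO

class CsvIndex:
    def __init__(self, path):
        self.path = path
        with open(path, newline="") as f:
            self.header = next(csv.reader(f))
        self.indices = {}                      # column name -> {value: [offsets]}

    def _build_index(self, column):
        col = self.header.index(column)
        index = defaultdict(list)
        with open(self.path, newline="") as f:
            f.readline()                       # skip header row
            offset = f.tell()
            for line in iter(f.readline, ""):
                value = next(csv.reader(StringIO(line)))[col]
                index[value].append(offset)
                offset = f.tell()
        self.indices[column] = index

    def lookup(self, column, value):
        """Rows where `column` equals `value`, read straight from the CSV file."""
        if column not in self.indices:         # dynamic index creation
            self._build_index(column)
        rows = []
        with open(self.path, newline="") as f:
            for off in self.indices[column].get(value, []):
                f.seek(off)
                rows.append(next(csv.reader(StringIO(f.readline()))))
        return rows

# Hypothetical usage:
# idx = CsvIndex("measurements.csv")
# print(idx.lookup("sample_id", "S42"))
```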