1000 results for Emeishan large igneous province
Abstract:
Objectives: To develop European League Against Rheumatism (EULAR) recommendations for the management of large vessel vasculitis. Methods: An expert group (10 rheumatologists, 3 nephrologists, 2 immunologists, 2 internists representing 8 European countries and the USA, a clinical epidemiologist and a representative from a drug regulatory agency) identified 10 topics for a systematic literature search through a modified Delphi technique. In accordance with standardised EULAR operating procedures, recommendations were derived for the management of large vessel vasculitis. In the absence of evidence, recommendations were formulated on the basis of a consensus opinion. Results: Seven recommendations were made relating to the assessment, investigation and treatment of patients with large vessel vasculitis. The strength of recommendations was restricted by the low level of evidence and EULAR standardised operating procedures. Conclusions: On the basis of evidence and expert consensus, management recommendations for large vessel vasculitis have been formulated and are commended for use in everyday clinical practice.
Abstract:
Donor: Bayol, Jean (1849-1905)
Abstract:
The presynaptic plasma membrane (PSPM) of cholinergic nerve terminals was purified from Torpedo electric organ using a large-scale procedure. Up to 500 g of frozen electric organ were fractionated in a single run, leading to the isolation of more than 100 mg of PSPM proteins. The purity of the fraction is similar to that of the synaptosomal plasma membrane obtained after subfractionation of Torpedo synaptosomes, as judged by its membrane-bound acetylcholinesterase activity, the number of Glycera convoluta neurotoxin binding sites, and the binding of two monoclonal antibodies directed against the PSPM. The specificity of these antibodies for the PSPM is demonstrated by immunofluorescence microscopy.
Abstract:
This study aimed to investigate the behaviour of two indicators of influenza activity in the area of Barcelona and to evaluate the usefulness of modelling them to improve the detection of influenza epidemics. DESIGN: Descriptive time series study using the number of deaths from all causes registered by funeral services and reported cases of influenza-like illness. The study covered five influenza seasons, from week 45 of 1988 to week 44 of 1993. The weekly numbers of deaths and of registered influenza-like illness cases were modelled by identifying ARIMA time series models. SETTING: Six large towns in the Barcelona province, each with more than 60,000 inhabitants and its own funeral services. MAIN RESULTS: For mortality, the proposed model was autoregressive of order 2 (ARIMA(2,0,0)); for morbidity, it was autoregressive of order 3 (ARIMA(3,0,0)). Finally, the two time series were analysed jointly to detect possible relationships between them. The joint analysis shows that the mortality series can be modelled separately from the reported morbidity series, whereas the morbidity series is influenced as much by the number of previously reported influenza cases as by the previously registered mortality. CONCLUSIONS: The model based on general mortality is useful for detecting epidemic influenza activity. However, because there is no absolute gold standard defining the start of an epidemic, the final decision on when an epidemic is declared and control measures are recommended should be taken after evaluating all the indicators included in the influenza surveillance programme.
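As an illustration of the modelling step this abstract describes, here is a minimal sketch in Python with statsmodels: an ARIMA(2,0,0) model is fitted to a weekly mortality series and each new week is flagged when the observed count exceeds the one-step-ahead 95% forecast limit. The data, the injected epidemic excess, and the alarm rule are assumptions for demonstration, not the study's series or thresholds.

```python
# Minimal sketch, assuming synthetic weekly all-cause mortality counts;
# this is not the study's data or its exact detection rule.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
baseline = 200 + 10 * np.sin(np.arange(260) * 2 * np.pi / 52)  # seasonal baseline
deaths = rng.poisson(baseline).astype(float)                   # weekly deaths
deaths[150:156] += 60                                          # injected "epidemic" excess

train, test = deaths[:150], deaths[150:]
results = ARIMA(train, order=(2, 0, 0)).fit()                  # AR(2), as in the abstract

alarms = []
for t, observed in enumerate(test):
    forecast = results.get_forecast(steps=1)
    upper = forecast.conf_int(alpha=0.05)[0, 1]                # upper 95% forecast limit
    if observed > upper:
        alarms.append(150 + t)                                 # epidemic signal this week
    results = results.append([observed])                       # roll the model forward

print("weeks flagged as epidemic activity:", alarms)
```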
Abstract:
BACKGROUND: Genotypes obtained with commercial SNP arrays have been extensively used in many large case-control or population-based cohorts for SNP-based genome-wide association studies for a multitude of traits. Yet, these genotypes capture only a small fraction of the variance of the studied traits. Genomic structural variants (GSV) such as Copy Number Variation (CNV) may account for part of the missing heritability, but their comprehensive detection requires either next-generation arrays or sequencing. Sophisticated algorithms that infer CNVs by combining the intensities from SNP probes for the two alleles can already be used to extract a partial view of such GSV from existing data sets. RESULTS: Here we present several advances to facilitate the latter approach. First, we introduce a novel CNV detection method based on a Gaussian Mixture Model. Second, we propose a new algorithm, PCA merge, for combining copy-number profiles from many individuals into consensus regions. We applied both our new methods and existing ones to data from 5612 individuals from the CoLaus study who were genotyped on Affymetrix 500K arrays. We developed a number of procedures to evaluate the performance of the different methods. These include comparison with previously published CNVs and the use of a replication sample of 239 individuals genotyped with Illumina 550K arrays. We also established a new evaluation procedure that exploits the fact that related individuals are expected to share their CNVs more frequently than randomly selected individuals. The ability to detect both rare and common CNVs provides a valuable resource that will facilitate association studies exploring potential phenotypic associations with CNVs. CONCLUSION: Our new methodologies for CNV detection and their evaluation will help in extracting additional information from the large amount of SNP-genotyping data available for various cohorts, and in using it to explore structural variants and their impact on complex traits.
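The Gaussian-Mixture-Model idea in this abstract can be sketched briefly (a minimal illustration, not the authors' implementation): probe intensities summarized as log R ratios are fitted with scikit-learn's GaussianMixture, and each component is mapped to a copy-number state by its mean. The number of components, the intensity levels, and the data below are all assumptions for demonstration.

```python
# Minimal sketch of GMM-based copy-number calling from synthetic
# log R ratio (LRR) intensities; component count and levels are assumed.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
lrr = rng.normal(0.0, 0.15, size=1000)          # mostly copy-neutral probes
lrr[300:340] = rng.normal(-0.6, 0.15, size=40)  # simulated deletion
lrr[700:760] = rng.normal(0.4, 0.15, size=60)   # simulated duplication

gmm = GaussianMixture(n_components=3, random_state=0).fit(lrr.reshape(-1, 1))
states = gmm.predict(lrr.reshape(-1, 1))

# Map components to states by their mean LRR: lowest = loss, highest = gain.
order = np.argsort(gmm.means_.ravel())
label = {order[0]: "loss", order[1]: "neutral", order[2]: "gain"}
calls = [label[s] for s in states]
print("probes called as loss:", sum(c == "loss" for c in calls))
print("probes called as gain:", sum(c == "gain" for c in calls))
```

Consecutive probes sharing a non-neutral state would then be merged into candidate CNV regions before any cross-individual consensus step such as the PCA merge the abstract proposes.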
Abstract:
Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, their format is often the first obstacle. The lack of standardized ways of exploring different data layouts requires an effort each time to solve the problem from scratch. The ability to access data in a rich, uniform manner, e.g. using Structured Query Language (SQL), would offer expressiveness and user-friendliness. Comma-separated values (CSV) is one of the most common data storage formats. Despite its simplicity, handling it becomes non-trivial as file size grows. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if their horizontal dimension reaches thousands of columns. Most databases are optimized for handling large numbers of rows rather than columns; therefore, performance for datasets with non-typical layouts is often unacceptable. Other challenges include schema creation, updates and repeated data imports. To address the above-mentioned problems, I present a system for accessing very large CSV-based datasets by means of SQL. It is characterized by: a "no copy" approach - data stay mostly in the CSV files; "zero configuration" - no need to specify a database schema; it is written in C++ with Boost [1], SQLite [2] and Qt [3], requires no installation and is very small; query rewriting, dynamic creation of indices for appropriate columns and static data retrieval directly from CSV files ensure efficient plan execution; it effortlessly supports millions of columns; per-value typing makes mixed text/number data easy to use; a very simple network protocol provides an efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware, along with educational videos, on its website [4]. It needs no prerequisites to run, as all of the libraries are included in the distribution package. I test it against existing database solutions using a battery of benchmarks and discuss the results.
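The ideas described in this abstract can be sketched in a few lines of Python (this is not the system itself, which is C++ and keeps data in the CSV files; the sketch below copies rows into an in-memory SQLite database): the schema is inferred from the CSV header ("zero configuration"), each value is coerced to a number where possible (mirroring the per-value typing), and an index is created on demand only for the column a query filters on. The file and column names in the usage example are hypothetical.

```python
# Minimal sketch, assuming a well-formed CSV with a header row;
# unlike the described system, this loads data into memory.
import csv
import sqlite3

def _coerce(value):
    # Per-value typing: store numbers as numbers, everything else as text.
    for cast in (int, float):
        try:
            return cast(value)
        except ValueError:
            pass
    return value

def query_csv(path, sql, index_column=None):
    con = sqlite3.connect(":memory:")
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)                    # schema inferred from header row
        cols = ", ".join(f'"{c}"' for c in header)
        con.execute(f"CREATE TABLE data ({cols})")
        marks = ", ".join("?" for _ in header)
        con.executemany(f"INSERT INTO data VALUES ({marks})",
                        ([_coerce(v) for v in row] for row in reader))
    if index_column is not None:                 # index built on demand for this query
        con.execute(f'CREATE INDEX idx ON data ("{index_column}")')
    return con.execute(sql).fetchall()

# Usage (hypothetical file and columns):
# rows = query_csv("measurements.csv",
#                  'SELECT "sample", "value" FROM data WHERE "value" > 10',
#                  index_column="value")
```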