Abstract:
This thesis is a compilation of projects studying the sediment processes that recharge debris flow channels. These works, conducted during my stay at the University of Lausanne, focus on the geological and morphological implications of torrent catchments for characterizing debris supply, a fundamental element in predicting debris flows. Other aspects of sediment dynamics are also considered, e.g. the coupling between headwaters and torrent, as well as the development of modeling software that simulates sediment transfer in torrent systems. The sediment activity at Manival, an active torrent system in the northern French Alps, was investigated using terrestrial laser scanning, supplemented with geostructural investigations and a survey of sediment transferred in the main torrent. A full year of sediment flux could be observed, which coincided with two debris flows and several bedload transport events. This study revealed that both debris flows were generated in the torrent and were preceded in time by recharge of material from the headwaters. Debris production occurred mostly during winter and early spring and was caused by large slope failures. Sediment transfers were more puzzling, occurring almost exclusively in early spring, subordinated to runoff conditions, and in autumn during long rainfall events. Intense summer rainstorms did not affect debris storage, which seems to rely on the stability of debris deposits. The morpho-geological implication in debris supply was evaluated using DEMs and field surveys. A slope angle-based classification of topography could characterize the mode of debris production and transfer. A slope stability analysis derived from the structures in the rock mass could assess susceptibility to failure. The modeled rockfall source areas included more than 97% of the recorded events, and the sediment budgets appeared to be correlated with the density of potential slope failures.
This work showed that analyzing process-related terrain morphology and susceptibility to slope failure documents the sediment dynamics and allows erosion zones leading to debris flow activity to be assessed quantitatively. The development of erosional landforms was evaluated by comparing their geometry with the orientations of potential rock slope failures and with the direction of maximum joint frequency. Structures in the rock mass, in particular wedge failures and the dominant discontinuities, appear to be a first-order control on the erosional mechanisms affecting bedrock-dominated catchments. They represent weaknesses that are exploited primarily by mass wasting and erosion processes, promoting not only the initiation of rock couloirs and gullies but also their propagation. Incorporating this geological control into geomorphic processes contributes to a better understanding of the landscape evolution of active catchments. A sediment flux algorithm was implemented in a sediment cascade model that discretizes the torrent catchment into channel reaches and individual process-response systems. Each conceptual element includes, in a simple manner, geomorphological and sediment flux information derived from GIS and complemented with field mapping. This tool enables the simulation of sediment transfer in channels under evolving debris supply and conveyance, and helps reduce the uncertainty inherent in sediment budget prediction in torrent systems. This work aims, very humbly, to shed light on some aspects of sediment dynamics in torrent systems.
Abstract:
The aim of this study was to determine the effect of using video analysis software on the interrater reliability of visual assessments of gait videos in children with cerebral palsy. Two clinicians viewed the same random selection of 20 sagittal and frontal video recordings of 12 children with cerebral palsy routinely acquired during outpatient rehabilitation clinics. Both observers rated these videos in a random sequence for each lower limb using the Observational Gait Scale, once with standard video software and once with video analysis software (Dartfish®), which can perform angle and timing measurements. The video analysis software improved interrater agreement, measured by weighted Cohen's kappa, for the total score (κ 0.778→0.809) and for all items that required angle and/or timing measurements (knee position mid-stance κ 0.344→0.591; hindfoot position mid-stance κ 0.160→0.346; foot contact mid-stance κ 0.700→0.854; timing of heel rise κ 0.769→0.835). The use of video analysis software is an efficient approach to improving the reliability of visual video assessments.
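The agreement statistic reported above, weighted Cohen's kappa, penalizes disagreements between two raters in proportion to their distance on the ordinal scale. As a minimal sketch (with made-up ratings, not the study's data), the linear-weighted variant can be computed as follows:

```python
from collections import Counter

def weighted_kappa(rater_a, rater_b, categories):
    """Linear-weighted Cohen's kappa for two raters on an ordinal scale."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)
    # Observed disagreement, weighted by |i - j| / (k - 1)
    obs = sum(abs(idx[a] - idx[b]) / (k - 1) for a, b in zip(rater_a, rater_b)) / n
    # Expected disagreement under independent marginal distributions
    pa, pb = Counter(rater_a), Counter(rater_b)
    exp = sum(
        abs(idx[ca] - idx[cb]) / (k - 1) * (pa[ca] / n) * (pb[cb] / n)
        for ca in categories
        for cb in categories
    )
    return 1.0 - obs / exp

# Hypothetical scores from two raters on a 0-3 ordinal item
a = [0, 1, 2, 2, 3, 1, 0, 2]
b = [0, 1, 2, 3, 3, 1, 1, 2]
print(round(weighted_kappa(a, b, [0, 1, 2, 3]), 3))  # → 0.778
```

With quadratic instead of linear weights (squaring the `|i - j|` term), the same scheme yields the other commonly reported variant.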
Abstract:
BACKGROUND: Today, recognition and classification of sequence motifs and protein folds is a mature field, thanks to the availability of numerous comprehensive and easy-to-use software packages and web-based services. Recognition of structural motifs, by comparison, is less well developed and much less frequently used, possibly due to a lack of easily accessible and easy-to-use software. RESULTS: In this paper, we describe an extension of DeepView/Swiss-PdbViewer through which structural motifs may be defined and searched for in large protein structure databases. We show that common structural motifs involved in stabilizing protein folds are present in evolutionarily and structurally unrelated proteins, including in deeply buried locations not obviously related to protein function. CONCLUSIONS: The possibility to define custom motifs and search for their occurrence in other proteins permits the identification of recurrent arrangements of residues that could have structural implications. The possibility to do so without having to maintain a complex software/hardware installation on site brings this technology to experts and non-experts alike.
Abstract:
Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, the data format is often the first obstacle: the lack of standardized ways of exploring different data layouts requires an effort to solve each problem from scratch. The ability to access data in a rich, uniform manner, e.g. using Structured Query Language (SQL), would offer expressiveness and user-friendliness. Comma-separated values (CSV) is one of the most common data storage formats. Despite its simplicity, handling it becomes non-trivial as file size grows. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if the horizontal dimension reaches thousands of columns. Most databases are optimized for handling a large number of rows rather than columns, so performance on datasets with non-typical layouts is often unacceptable. Other challenges include schema creation, updates and repeated data imports. To address these problems, I present a system for accessing very large CSV-based datasets by means of SQL. It is characterized by: a "no copy" approach - the data stay mostly in the CSV files; "zero configuration" - no need to specify a database schema; implementation in C++ with boost [1], SQLite [2] and Qt [3], requiring no installation and very little disk space; query rewriting, dynamic creation of indices for appropriate columns and static data retrieval directly from CSV files, which together ensure efficient plan execution; effortless support for millions of columns; per-value typing, which makes mixed text/number data easy to handle; and a very simple network protocol that provides an efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware, along with educational videos, on its website [4]. It needs no prerequisites to run, as all of the libraries are included in the distribution package.
I test it against existing database solutions using a battery of benchmarks and discuss the results.
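For contrast with the "no copy", zero-configuration design described above, the conventional import-then-query baseline can be sketched in a few lines of standard Python: the CSV is copied row by row into an in-memory SQLite table before any SQL can run (the file contents and table name here are illustrative, not from the system itself):

```python
import csv
import io
import sqlite3

# Hypothetical two-column CSV payload; a real file would be opened with open().
CSV_DATA = "id,value\n1,10.5\n2,3.25\n3,7.0\n"

def query_csv(csv_text, sql):
    """Copy a CSV into an in-memory SQLite table, then run a SQL query on it."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    con = sqlite3.connect(":memory:")
    cols = ", ".join(f'"{c}"' for c in header)
    placeholders = ", ".join("?" for _ in header)
    con.execute(f"CREATE TABLE data ({cols})")
    con.executemany(f"INSERT INTO data VALUES ({placeholders})", body)
    return con.execute(sql).fetchall()

print(query_csv(CSV_DATA, "SELECT COUNT(*), MAX(CAST(value AS REAL)) FROM data"))
# → [(3, 10.5)]
```

The copy step is exactly what becomes prohibitive for huge files and thousands of columns, which is why the described system rewrites queries against the CSV directly instead.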
Abstract:
In recent years, protein-ligand docking has become a powerful tool for drug development. Although several approaches suitable for high-throughput screening are available, there is a need for methods able to identify binding modes with high accuracy. This accuracy is essential to reliably compute the binding free energy of the ligand. Such methods are needed when the binding mode of lead compounds is not determined experimentally but is needed for structure-based lead optimization. We present here a new docking software, called EADock, that aims at this goal. It uses a hybrid evolutionary algorithm with two fitness functions, in combination with sophisticated management of diversity. EADock is interfaced with the CHARMM package for energy calculations and coordinate handling. A validation was carried out on 37 crystallized protein-ligand complexes featuring 11 different proteins. The search space was defined as a sphere of 15 Å around the center of mass of the ligand position in the crystal structure, and, in contrast to other benchmarks, our algorithm was fed with optimized ligand positions up to 10 Å root mean square deviation (RMSD) from the crystal structure, excluding the crystal structure itself. This validation illustrates the efficiency of our sampling strategy: correct binding modes, defined by an RMSD to the crystal structure lower than 2 Å, were identified and ranked first for 68% of the complexes. The success rate increases to 78% when considering the five best-ranked clusters, and to 92% when all clusters present in the last generation are taken into account. Most failures could be explained by the presence of crystal contacts in the experimental structure. Finally, the ability of EADock to accurately predict binding modes in a real application was illustrated by the successful docking of the RGD cyclic pentapeptide on the alphaVbeta3 integrin, starting far away from the binding pocket.
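The success criterion above - an RMSD to the crystal structure below 2 Å - reduces to a simple formula once docked and reference poses share a frame and atom ordering. A minimal sketch with hypothetical coordinates (real ligands have many more atoms):

```python
import math

def rmsd(coords_a, coords_b):
    """Root mean square deviation between two equal-length lists of 3D points."""
    assert len(coords_a) == len(coords_b)
    total = sum(
        (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
        for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b)
    )
    return math.sqrt(total / len(coords_a))

# Hypothetical docked vs. crystal ligand atom positions (already superposed), in Å
docked  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 1.0, 0.0)]
crystal = [(0.0, 0.5, 0.0), (1.0, 0.5, 0.0), (2.0, 1.5, 0.0)]
print(rmsd(docked, crystal))  # → 0.5, below the 2 Å success threshold
```

Each squared displacement is averaged over atoms before taking the root, so a few badly placed atoms can dominate the score - which is why RMSD is a strict measure of binding-mode accuracy.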
Abstract:
Over the past decade a series of trials of the EORTC Brain Tumor Group (BTG) has substantially influenced and shaped the standard-of-care of primary brain tumors. All these trials were coupled with biological research that has allowed for better understanding of the biology of these tumors. In glioblastoma, EORTC trial 26981/22981, conducted jointly with the National Cancer Institute of Canada Clinical Trials Group, showed superiority of concomitant radiochemotherapy with temozolomide over radiotherapy alone. It also identified the first predictive marker for benefit from alkylating agent chemotherapy in glioblastoma, methylation of the O6-methylguanine-DNA methyltransferase (MGMT) gene promoter. In another large randomized trial, EORTC 26951, adjuvant chemotherapy in anaplastic oligodendroglial tumors was investigated. Despite an improvement in progression-free survival, this did not translate into a survival benefit. The third example of a landmark trial is the EORTC 22845 trial. This trial, led by the EORTC Radiation Oncology Group, forms the basis for an expectant approach to patients with low-grade glioma, as early radiotherapy indeed prolongs time to tumor progression but with no benefit in overall survival. This trial is the key reference in deciding at what point in their disease adult patients with low-grade glioma should be irradiated. Future initiatives will continue to focus on the conduct of controlled trials, rational academic drug development, and systematic evaluation of tumor tissue including biomarker development for personalized therapy. Important lessons learned in neurooncology are to dare to ask real questions rather than merely rapidly testing new compounds, and the value of well-designed trials, including the presence of controls, central pathology review, strict radiology protocols and biobanking. Structurally, the EORTC BTG has evolved into a multidisciplinary group with strong transatlantic alliances.
It has contributed to the maturation of neurooncology within the oncological sciences.
Abstract:
The growth rate of acoustic tumors, although slow, varies widely. There may be a continuous spectrum or distinct groups of tumor growth rates. Clinical, audiologic, and conventional histologic tests have failed to shed any light on this problem. Modern immunohistochemical methods may stand a better chance. The Ki-67 monoclonal antibody stains proliferating cells and is used in this study to investigate the growth fraction of 13 skull base schwannomas. The acoustic tumors can be divided into two different growth groups, one with a rate five times the other. The literature is reviewed to see if this differentiation is borne out by the radiologic studies. Distinct growth rates have been reported: one very slow, taking 50 years to reach 1 cm in diameter, a second rate with a diameter increase of 0.2 cm/year, and a third rate five times the second, with a 1.0 cm increase in diameter per year. A fourth group growing at 2.5 cm/year is postulated, but these tumors cannot be followed for long radiologically, since symptoms demand surgical intervention. The clinical implications of these separate growth rates are discussed.
Abstract:
A novel melanoma-associated differentiation Ag whose surface expression can be enhanced or induced by IFN-gamma was identified by mAb Me14/D12. Testing of numerous tumor cell lines and tumor tissue sections showed that the Me14/D12-defined Ag was present not only on melanoma but also on other tumor lines of neuroectodermal origin such as gliomas and neuroblastomas, on some lymphoblastic B cell lines, and on monocytes and macrophages. Immunoprecipitation by mAb Me14/D12 of lysates from [35S]methionine-labeled melanoma cells analyzed by SDS-PAGE revealed two polypeptide chains of 33 and 38 kDa, under both reducing and nonreducing conditions. Cross-linking experiments indicated that the two chains were present at the cell surface as a dimeric structure. Two-dimensional gel electrophoresis showed that the two chains of 33 and 38 kDa had isoelectric points of 6.2 and 5.7, respectively. Treatment of the melanoma cells with tunicamycin, an inhibitor of N-linked glycosylation, resulted in a reduction of the Mr from 33 to 24 kDa and from 38 to 26 kDa. Peptide maps obtained after Staphylococcus aureus V8 protease digestion showed no shared peptides between the two chains. Although biochemical data indicate that Me14/D12 molecules do not correspond to any known MHC class II Ag, their dimeric structure, tissue distribution, and regulation by IFN-gamma suggest that they could represent a new member of the MHC class II family.
Abstract:
The Gene Ontology (GO) Consortium (http://www.geneontology.org) (GOC) continues to develop, maintain and use a set of structured, controlled vocabularies for the annotation of genes, gene products and sequences. The GO ontologies are expanding both in content and in structure. Several new relationship types have been introduced and used, along with existing relationships, to create links between and within the GO domains. These improve the representation of biology, facilitate querying, and allow GO developers to systematically check for and correct inconsistencies within the GO. Gene product annotation using GO continues to increase both in the number of total annotations and in species coverage. GO tools, such as OBO-Edit, an ontology-editing tool, and AmiGO, the GOC ontology browser, have seen major improvements in functionality, speed and ease of use.
Abstract:
Introduction: The field of connectomic research is growing rapidly, as a result of methodological advances in structural neuroimaging on many spatial scales. In particular, progress in diffusion MRI data acquisition and processing has made macroscopic structural connectivity maps available in vivo through Connectome Mapping Pipelines (Hagmann et al, 2008), yielding so-called Connectomes (Hagmann 2005, Sporns et al, 2005). These exhibit both spatial and topological information that constrain functional imaging studies and are relevant to their interpretation. The need has grown for a special-purpose software tool for both clinical researchers and neuroscientists to support investigations of such connectome data. Methods: We developed the ConnectomeViewer, a powerful, extensible software tool for visualization and analysis in connectomic research. It uses the newly defined, container-like Connectome File Format, specifying networks (GraphML), surfaces (Gifti), volumes (Nifti), track data (TrackVis) and metadata. The use of Python as the programming language allows it to be cross-platform and gives it access to a multitude of scientific libraries. Results: Thanks to a flexible plugin architecture, functionality can easily be enhanced for specific purposes. The following features are already implemented: * Ready use of libraries, e.g. for complex network analysis (NetworkX) and data plotting (Matplotlib); more brain connectivity measures will be implemented in a future release (Rubinov et al, 2009). * 3D view of networks with node positioning based on the corresponding ROI surface patch; other layouts are possible. * Picking functionality to select nodes, select edges, retrieve more node information (ConnectomeWiki) and toggle surface representations. * Interactive thresholding and modality selection of edge properties using filters. * Storage of arbitrary metadata for networks, thereby allowing e.g. group-based analysis or meta-analysis. * A Python shell for scripting.
Application data is exposed and can be modified or used for further post-processing. * Visualization pipelines composed of filters and modules using Mayavi (Ramachandran et al, 2008). * An interface to TrackVis to visualize track data; selected nodes are converted to ROIs for fiber filtering. The Connectome Mapping Pipeline (Hagmann et al, 2008) processed 20 healthy subjects into an average Connectome dataset. The figures show the ConnectomeViewer user interface using this dataset; connections are shown that occur in all 20 subjects. The dataset is freely available from the homepage (connectomeviewer.org). Conclusions: The ConnectomeViewer is a cross-platform, open-source software tool that provides extensive visualization and analysis capabilities for connectomic research. It has a modular architecture, integrates the relevant datatypes and is completely scriptable. Visit www.connectomics.org to get involved as a user or developer.
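The interactive edge thresholding described above - e.g. showing only connections present in all 20 subjects - amounts to filtering a network's edges by a property value. A minimal sketch, independent of the actual plugin API and with hypothetical region names and property keys:

```python
def threshold_edges(edges, prop, cutoff):
    """Keep only edges whose given property meets the cutoff.

    `edges` maps (node_a, node_b) pairs to a dict of edge properties,
    mirroring the filter-based thresholding described above.
    """
    return {
        pair: props
        for pair, props in edges.items()
        if props.get(prop, 0.0) >= cutoff
    }

# Hypothetical mini-network: one edge property counts the subjects
# in which that connection was observed.
edges = {
    ("lh.precuneus", "rh.precuneus"): {"subjects": 20, "fiber_count": 310},
    ("lh.insula", "lh.putamen"): {"subjects": 12, "fiber_count": 45},
}
common = threshold_edges(edges, "subjects", 20)  # present in all 20 subjects
print(sorted(common))  # → [('lh.precuneus', 'rh.precuneus')]
```

Switching `prop` to another key (here, the assumed `"fiber_count"`) is the modality selection mentioned above: the same filter machinery serves any edge attribute.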
Abstract:
INTRODUCTION: Video records are widely used to analyze performance in alpine skiing at professional and amateur levels. Part of these analyses requires labeling certain movements (i.e. determining when specific events occur). Although differences among coaches, and differences for the same coach between different dates, are expected, they have never been quantified. Knowing these differences is essential to determine which parameters can be used reliably. This study aimed to quantify the precision and repeatability of alpine skiing coaches of various levels, as has been done in other fields (Koo et al, 2005). METHODS: Software similar to commercial products was designed to allow video analysis. 15 coaches divided into 3 groups (5 amateur coaches (G1), 5 professional instructors (G2) and 5 semi-professional coaches (G3)) were enrolled. They were asked to label 15 timing parameters (TP) per curve according to the Swiss ski manual (Terribilini et al, 2001). TP included phases (initiation, steering I-II) and body and ski movements (e.g. rotation, weighting, extension, balance). Three video sequences sampled at 25 Hz were used, and one curve per video was labeled. The first video was used to familiarize the analyzer with the software. The two other videos, corresponding to slalom and giant slalom, were considered for the analysis. G1 performed the analysis twice (A1 and A2) on different dates, and the TP were randomized between the two analyses. Reference TP were defined as the median of G2 and G3 at A1. Precision was defined as the RMS difference between individual TP and the reference TP, whereas repeatability was calculated as the RMS difference between individual TP at A1 and at A2. RESULTS AND DISCUSSION: For G1, G2 and G3, precisions of +/-5.6, +/-3.0 and +/-2.0 frames, respectively, were obtained. These results, showing that G2 was more precise than G1 and G3 more precise than G2, were in accordance with the group levels.
The repeatability for G1 was +/-3.1 frames. Furthermore, differences in precision among TP were observed for G2 and G3, with the largest differences of +/-5.9 frames for "body counter rotation movement in steering phase II" and of 0.8 frames for "ski unweighting in initiation phase". CONCLUSION: This study quantified coaches' ability to label video in terms of precision and repeatability. The best precision, obtained for G3, was +/-0.08 s, which corresponds to +/-6.5% of the curve cycle. Regarding repeatability, we obtained +/-0.12 s for G1, corresponding to +/-12% of the curve cycle. The repeatability of G2 and G3 is expected to be lower than the precision of G1, and the corresponding repeatability will be assessed soon. In conclusion, our results indicate that the labeling of video records is reliable for some TP, whereas caution is required for others. REFERENCES Koo S, Gold MD, Andriacchi TP. (2005). Osteoarthritis, 13, 782-789. Terribilini M, et al. (2001). Swiss Ski manual, 29-46. IASS, Lucerne.
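Both precision and repeatability above are defined as RMS differences between frame labels, with seconds obtained by dividing by the 25 Hz frame rate. A minimal sketch of that computation, using hypothetical frame numbers rather than the study's data:

```python
import math

def rms_difference(frames_a, frames_b):
    """RMS of pairwise differences between two lists of labeled frame indices."""
    assert len(frames_a) == len(frames_b)
    return math.sqrt(
        sum((a - b) ** 2 for a, b in zip(frames_a, frames_b)) / len(frames_a)
    )

# Hypothetical timing parameters labeled by one coach vs. the reference labels.
coach     = [12, 40, 77, 103]
reference = [10, 43, 75, 105]
rms_frames = rms_difference(coach, reference)   # precision, in frames
print(round(rms_frames / 25.0, 3))              # in seconds, at 25 Hz sampling
```

Repeatability uses the same function, simply comparing one coach's labels at A1 against that coach's own labels at A2 instead of against the reference.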
Abstract:
The broad aim of biomedical science in the postgenomic era is to link genomic and phenotype information to allow deeper understanding of the processes leading from genomic changes to altered phenotype and disease. The EuroPhenome project (http://www.EuroPhenome.org) is a comprehensive resource for raw and annotated high-throughput phenotyping data arising from projects such as EUMODIC. EUMODIC is gathering data from the EMPReSSslim pipeline (http://www.empress.har.mrc.ac.uk/), which is performed on inbred mouse strains and knock-out lines arising from the EUCOMM project. The EuroPhenome interface allows the user to access the data via either phenotype or genotype, and in a variety of ways, including graphical display, statistical analysis and access to the raw data via web services. The raw phenotyping data captured in EuroPhenome is processed by an annotation pipeline which automatically identifies mutants that differ statistically from the appropriate baseline and assigns ontology terms for that specific test. Mutant phenotypes can be quickly identified using two EuroPhenome tools: PhenoMap, a graphical representation of statistically relevant phenotypes, and mining for a mutant using ontology terms. To assist with data definition and cross-database comparisons, phenotype data is annotated using combinations of terms from biological ontologies.
Abstract:
The metabolic balance method was performed on three men to investigate the fate of large excesses of carbohydrate. Glycogen stores, which were first depleted by diet (3 d, 8.35 +/- 0.27 MJ [1994 +/- 65 kcal] decreasing to 5.70 +/- 1.03 MJ [1361 +/- 247 kcal], 15% protein, 75% fat, 10% carbohydrate) and exercise, were repleted during 7 d carbohydrate overfeeding (11% protein, 3% fat, and 86% carbohydrate) providing 15.25 +/- 1.10 MJ (3642 +/- 263 kcal) on the first day, increasing progressively to 20.64 +/- 1.30 MJ (4930 +/- 311 kcal) on the last day of overfeeding. Glycogen depletion was again accomplished with 2 d of carbohydrate restriction (2.52 MJ/d [602 kcal/d], 85% protein, and 15% fat). Glycogen storage capacity in man is approximately 15 g/kg body weight and can accommodate a gain of approximately 500 g before net lipid synthesis contributes to increasing body fat mass. When the glycogen stores are saturated, massive intakes of carbohydrate are disposed of by high carbohydrate-oxidation rates and substantial de novo lipid synthesis (150 g lipid/d using approximately 475 g CHO/d) without postabsorptive hyperglycemia.
Abstract:
The book presents the state of the art in machine learning algorithms (artificial neural networks of different architectures, support vector machines, etc.) as applied to the classification and mapping of spatially distributed environmental data. Basic geostatistical algorithms are presented as well. New trends in machine learning and their application to spatial data are given, and real case studies based on environmental and pollution data are carried out. The book provides a CD-ROM with the Machine Learning Office software, including sample data sets, that will allow both students and researchers to rapidly put the concepts into practice.